2025-05-06 00:00:10.866091 | Job console starting...
2025-05-06 00:00:10.886594 | Updating repositories
2025-05-06 00:00:11.042000 | Preparing job workspace
2025-05-06 00:00:12.770359 | Running Ansible setup...
2025-05-06 00:00:21.118204 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-06 00:00:22.656445 |
2025-05-06 00:00:22.656575 | PLAY [Base pre]
2025-05-06 00:00:22.698195 |
2025-05-06 00:00:22.698369 | TASK [Setup log path fact]
2025-05-06 00:00:22.768593 | orchestrator | ok
2025-05-06 00:00:22.826510 |
2025-05-06 00:00:22.827436 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-06 00:00:22.957461 | orchestrator | ok
2025-05-06 00:00:22.987383 |
2025-05-06 00:00:22.987503 | TASK [emit-job-header : Print job information]
2025-05-06 00:00:23.118299 | # Job Information
2025-05-06 00:00:23.118457 | Ansible Version: 2.15.3
2025-05-06 00:00:23.118484 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-06 00:00:23.118510 | Pipeline: periodic-midnight
2025-05-06 00:00:23.118527 | Executor: 7d211f194f6a
2025-05-06 00:00:23.118542 | Triggered by: https://github.com/osism/testbed
2025-05-06 00:00:23.118557 | Event ID: 0ab604cd8a1748a5a3b6a0a14f4ab814
2025-05-06 00:00:23.136602 |
2025-05-06 00:00:23.136698 | LOOP [emit-job-header : Print node information]
2025-05-06 00:00:23.431062 | orchestrator | ok:
2025-05-06 00:00:23.431229 | orchestrator | # Node Information
2025-05-06 00:00:23.431257 | orchestrator | Inventory Hostname: orchestrator
2025-05-06 00:00:23.431277 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-06 00:00:23.431295 | orchestrator | Username: zuul-testbed05
2025-05-06 00:00:23.431312 | orchestrator | Distro: Debian 12.10
2025-05-06 00:00:23.431331 | orchestrator | Provider: static-testbed
2025-05-06 00:00:23.431348 | orchestrator | Label: testbed-orchestrator
2025-05-06 00:00:23.431365 | orchestrator | Product Name: OpenStack Nova
2025-05-06 00:00:23.431381 | orchestrator | Interface IP: 81.163.193.140
2025-05-06 00:00:23.459935 |
2025-05-06 00:00:23.460050 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-06 00:00:24.368068 | orchestrator -> localhost | changed
2025-05-06 00:00:24.375658 |
2025-05-06 00:00:24.375737 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-06 00:00:26.460183 | orchestrator -> localhost | changed
2025-05-06 00:00:26.488134 |
2025-05-06 00:00:26.488253 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-06 00:00:27.322711 | orchestrator -> localhost | ok
2025-05-06 00:00:27.332201 |
2025-05-06 00:00:27.332309 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-06 00:00:27.426256 | orchestrator | ok
2025-05-06 00:00:27.479330 | orchestrator | included: /var/lib/zuul/builds/e6b26d2a336d434bb99c7a10a0588d88/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-06 00:00:27.510586 |
2025-05-06 00:00:27.510696 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-06 00:00:28.955253 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-06 00:00:28.955447 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/e6b26d2a336d434bb99c7a10a0588d88/work/e6b26d2a336d434bb99c7a10a0588d88_id_rsa
2025-05-06 00:00:28.955483 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/e6b26d2a336d434bb99c7a10a0588d88/work/e6b26d2a336d434bb99c7a10a0588d88_id_rsa.pub
2025-05-06 00:00:28.955507 | orchestrator -> localhost | The key fingerprint is:
2025-05-06 00:00:28.955529 | orchestrator -> localhost | SHA256:CGK/0UIn+fgUNzuuA0U+J3TvIfPT3ltcp5VNNsV332k zuul-build-sshkey
2025-05-06 00:00:28.955552 | orchestrator -> localhost | The key's randomart image is:
2025-05-06 00:00:28.955573 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-06 00:00:28.955593 | orchestrator -> localhost | | ..|
2025-05-06 00:00:28.955613 | orchestrator -> localhost | | .o . =|
2025-05-06 00:00:28.955642 | orchestrator -> localhost | | o =+o.o. .O|
2025-05-06 00:00:28.955662 | orchestrator -> localhost | | . + B=++oo E*|
2025-05-06 00:00:28.955682 | orchestrator -> localhost | | =.=+S= o ..=|
2025-05-06 00:00:28.955701 | orchestrator -> localhost | | .* . .+ . +o|
2025-05-06 00:00:28.955729 | orchestrator -> localhost | | ... . o .. o|
2025-05-06 00:00:28.955760 | orchestrator -> localhost | | .. . .. |
2025-05-06 00:00:28.955781 | orchestrator -> localhost | | .. .. |
2025-05-06 00:00:28.955801 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-06 00:00:28.955855 | orchestrator -> localhost | ok: Runtime: 0:00:00.297793
2025-05-06 00:00:28.967139 |
2025-05-06 00:00:28.967247 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-06 00:00:29.024566 | orchestrator | ok
2025-05-06 00:00:29.095513 | orchestrator | included: /var/lib/zuul/builds/e6b26d2a336d434bb99c7a10a0588d88/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-06 00:00:29.145153 |
2025-05-06 00:00:29.145265 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-06 00:00:29.222845 | orchestrator | skipping: Conditional result was False
2025-05-06 00:00:29.231465 |
2025-05-06 00:00:29.231567 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-06 00:00:30.038109 | orchestrator | changed
2025-05-06 00:00:30.046380 |
2025-05-06 00:00:30.046490 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-06 00:00:30.355657 | orchestrator | ok
2025-05-06 00:00:30.372446 |
2025-05-06 00:00:30.372552 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-06 00:00:30.923311 | orchestrator | ok
2025-05-06 00:00:30.934193 |
2025-05-06 00:00:30.934297 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-06 00:00:31.429656 | orchestrator | ok
2025-05-06 00:00:31.447926 |
2025-05-06 00:00:31.448029 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-06 00:00:31.499474 | orchestrator | skipping: Conditional result was False
2025-05-06 00:00:31.507693 |
2025-05-06 00:00:31.507796 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-06 00:00:32.424602 | orchestrator -> localhost | changed
2025-05-06 00:00:32.438959 |
2025-05-06 00:00:32.439057 | TASK [add-build-sshkey : Add back temp key]
2025-05-06 00:00:32.901830 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/e6b26d2a336d434bb99c7a10a0588d88/work/e6b26d2a336d434bb99c7a10a0588d88_id_rsa (zuul-build-sshkey)
2025-05-06 00:00:32.902015 | orchestrator -> localhost | ok: Runtime: 0:00:00.031674
2025-05-06 00:00:32.909637 |
2025-05-06 00:00:32.909719 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-06 00:00:33.416575 | orchestrator | ok
2025-05-06 00:00:33.427445 |
2025-05-06 00:00:33.427537 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-06 00:00:33.466930 | orchestrator | skipping: Conditional result was False
2025-05-06 00:00:33.480915 |
2025-05-06 00:00:33.481021 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-06 00:00:33.938838 | orchestrator | ok
2025-05-06 00:00:33.974956 |
2025-05-06 00:00:33.975090 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-06 00:00:34.039073 | orchestrator | ok
2025-05-06 00:00:34.055619 |
2025-05-06 00:00:34.055730 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-06 00:00:34.654517 | orchestrator -> localhost | ok
2025-05-06 00:00:34.668262 |
2025-05-06 00:00:34.668380 | TASK [validate-host : Collect information about the host]
2025-05-06 00:00:36.064694 | orchestrator | ok
2025-05-06 00:00:36.086922 |
2025-05-06 00:00:36.087015 | TASK [validate-host : Sanitize hostname]
2025-05-06 00:00:36.146819 | orchestrator | ok
2025-05-06 00:00:36.155945 |
2025-05-06 00:00:36.156046 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-06 00:00:37.144168 | orchestrator -> localhost | changed
2025-05-06 00:00:37.150508 |
2025-05-06 00:00:37.150588 | TASK [validate-host : Collect information about zuul worker]
2025-05-06 00:00:37.818202 | orchestrator | ok
2025-05-06 00:00:37.823963 |
2025-05-06 00:00:37.824048 | TASK [validate-host : Write out all zuul information for each host]
2025-05-06 00:00:38.455461 | orchestrator -> localhost | changed
2025-05-06 00:00:38.477247 |
2025-05-06 00:00:38.477340 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-06 00:00:38.802278 | orchestrator | ok
2025-05-06 00:00:38.808537 |
2025-05-06 00:00:38.808617 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-06 00:00:57.455000 | orchestrator | changed:
2025-05-06 00:00:57.455282 | orchestrator | .d..t...... src/
2025-05-06 00:00:57.455324 | orchestrator | .d..t...... src/github.com/
2025-05-06 00:00:57.455352 | orchestrator | .d..t...... src/github.com/osism/
2025-05-06 00:00:57.455374 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-06 00:00:57.455395 | orchestrator | RedHat.yml
2025-05-06 00:00:57.474473 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-06 00:00:57.474490 | orchestrator | RedHat.yml
2025-05-06 00:00:57.474836 | orchestrator | = 1.53.0"...
2025-05-06 00:01:10.451773 | orchestrator | 00:01:10.451 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-06 00:01:10.529174 | orchestrator | 00:01:10.528 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-06 00:01:11.904456 | orchestrator | 00:01:11.904 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-05-06 00:01:12.650630 | orchestrator | 00:01:12.650 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-05-06 00:01:13.666544 | orchestrator | 00:01:13.666 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-06 00:01:14.440862 | orchestrator | 00:01:14.440 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-06 00:01:15.851278 | orchestrator | 00:01:15.850 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-06 00:01:16.977791 | orchestrator | 00:01:16.977 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-06 00:01:16.977986 | orchestrator | 00:01:16.977 STDOUT terraform: Providers are signed by their developers.
2025-05-06 00:01:16.978080 | orchestrator | 00:01:16.977 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-06 00:01:16.978281 | orchestrator | 00:01:16.977 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-06 00:01:16.978304 | orchestrator | 00:01:16.977 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-06 00:01:16.978326 | orchestrator | 00:01:16.977 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-06 00:01:16.978414 | orchestrator | 00:01:16.978 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-06 00:01:16.978495 | orchestrator | 00:01:16.978 STDOUT terraform: you run "tofu init" in the future.
2025-05-06 00:01:16.978593 | orchestrator | 00:01:16.978 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-06 00:01:16.978729 | orchestrator | 00:01:16.978 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-06 00:01:16.978871 | orchestrator | 00:01:16.978 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-06 00:01:16.978910 | orchestrator | 00:01:16.978 STDOUT terraform: should now work.
2025-05-06 00:01:16.979057 | orchestrator | 00:01:16.978 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-06 00:01:16.979188 | orchestrator | 00:01:16.979 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-06 00:01:16.979309 | orchestrator | 00:01:16.979 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-06 00:01:17.188114 | orchestrator | 00:01:17.187 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-06 00:01:17.390098 | orchestrator | 00:01:17.389 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-06 00:01:17.390194 | orchestrator | 00:01:17.390 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-06 00:01:17.390339 | orchestrator | 00:01:17.390 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-06 00:01:17.390376 | orchestrator | 00:01:17.390 STDOUT terraform: for this configuration.
2025-05-06 00:01:17.611023 | orchestrator | 00:01:17.610 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-06 00:01:17.717824 | orchestrator | 00:01:17.717 STDOUT terraform: ci.auto.tfvars
2025-05-06 00:01:17.948681 | orchestrator | 00:01:17.948 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-06 00:01:18.917340 | orchestrator | 00:01:18.917 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-06 00:01:19.446593 | orchestrator | 00:01:19.446 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-06 00:01:19.664207 | orchestrator | 00:01:19.663 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-06 00:01:19.664291 | orchestrator | 00:01:19.664 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-06 00:01:19.664302 | orchestrator | 00:01:19.664 STDOUT terraform:   + create
2025-05-06 00:01:19.664395 | orchestrator | 00:01:19.664 STDOUT terraform:  <= read (data resources)
2025-05-06 00:01:19.664416 | orchestrator | 00:01:19.664 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-06 00:01:19.664434 | orchestrator | 00:01:19.664 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-06 00:01:19.664469 | orchestrator | 00:01:19.664 STDOUT terraform:   # (config refers to values not yet known)
2025-05-06 00:01:19.664543 | orchestrator | 00:01:19.664 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-06 00:01:19.664596 | orchestrator | 00:01:19.664 STDOUT terraform:   + checksum = (known after apply)
2025-05-06 00:01:19.664644 | orchestrator | 00:01:19.664 STDOUT terraform:   + created_at = (known after apply)
2025-05-06 00:01:19.664708 | orchestrator | 00:01:19.664 STDOUT terraform:   + file = (known after apply)
2025-05-06 00:01:19.664757 | orchestrator | 00:01:19.664 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.664814 | orchestrator | 00:01:19.664 STDOUT terraform:   + metadata = (known after apply)
2025-05-06 00:01:19.664877 | orchestrator | 00:01:19.664 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-06 00:01:19.664923 | orchestrator | 00:01:19.664 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-06 00:01:19.664965 | orchestrator | 00:01:19.664 STDOUT terraform:   + most_recent = true
2025-05-06 00:01:19.665010 | orchestrator | 00:01:19.664 STDOUT terraform:   + name = (known after apply)
2025-05-06 00:01:19.665065 | orchestrator | 00:01:19.665 STDOUT terraform:   + protected = (known after apply)
2025-05-06 00:01:19.665135 | orchestrator | 00:01:19.665 STDOUT terraform:   + region = (known after apply)
2025-05-06 00:01:19.665173 | orchestrator | 00:01:19.665 STDOUT terraform:   + schema = (known after apply)
2025-05-06 00:01:19.665227 | orchestrator | 00:01:19.665 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-06 00:01:19.665303 | orchestrator | 00:01:19.665 STDOUT terraform:   + tags = (known after apply)
2025-05-06 00:01:19.665346 | orchestrator | 00:01:19.665 STDOUT terraform:   + updated_at = (known after apply)
2025-05-06 00:01:19.665373 | orchestrator | 00:01:19.665 STDOUT terraform:   }
2025-05-06 00:01:19.665689 | orchestrator | 00:01:19.665 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-06 00:01:19.665786 | orchestrator | 00:01:19.665 STDOUT terraform:   # (config refers to values not yet known)
2025-05-06 00:01:19.665806 | orchestrator | 00:01:19.665 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-06 00:01:19.665826 | orchestrator | 00:01:19.665 STDOUT terraform:   + checksum = (known after apply)
2025-05-06 00:01:19.665846 | orchestrator | 00:01:19.665 STDOUT terraform:   + created_at = (known after apply)
2025-05-06 00:01:19.665861 | orchestrator | 00:01:19.665 STDOUT terraform:   + file = (known after apply)
2025-05-06 00:01:19.665875 | orchestrator | 00:01:19.665 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.665888 | orchestrator | 00:01:19.665 STDOUT terraform:   + metadata = (known after apply)
2025-05-06 00:01:19.665906 | orchestrator | 00:01:19.665 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-06 00:01:19.665953 | orchestrator | 00:01:19.665 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-06 00:01:19.665973 | orchestrator | 00:01:19.665 STDOUT terraform:   + most_recent = true
2025-05-06 00:01:19.666052 | orchestrator | 00:01:19.665 STDOUT terraform:   + name = (known after apply)
2025-05-06 00:01:19.666075 | orchestrator | 00:01:19.665 STDOUT terraform:   + protected = (known after apply)
2025-05-06 00:01:19.666128 | orchestrator | 00:01:19.666 STDOUT terraform:   + region = (known after apply)
2025-05-06 00:01:19.666227 | orchestrator | 00:01:19.666 STDOUT terraform:   + schema = (known after apply)
2025-05-06 00:01:19.666382 | orchestrator | 00:01:19.666 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-06 00:01:19.666405 | orchestrator | 00:01:19.666 STDOUT terraform:   + tags = (known after apply)
2025-05-06 00:01:19.666414 | orchestrator | 00:01:19.666 STDOUT terraform:   + updated_at = (known after apply)
2025-05-06 00:01:19.666475 | orchestrator | 00:01:19.666 STDOUT terraform:   }
2025-05-06 00:01:19.666484 | orchestrator | 00:01:19.666 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-06 00:01:19.666549 | orchestrator | 00:01:19.666 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-06 00:01:19.666600 | orchestrator | 00:01:19.666 STDOUT terraform:   + content = (known after apply)
2025-05-06 00:01:19.666666 | orchestrator | 00:01:19.666 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-06 00:01:19.666735 | orchestrator | 00:01:19.666 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-06 00:01:19.666801 | orchestrator | 00:01:19.666 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-06 00:01:19.666880 | orchestrator | 00:01:19.666 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-06 00:01:19.666934 | orchestrator | 00:01:19.666 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-06 00:01:19.667000 | orchestrator | 00:01:19.666 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-06 00:01:19.667044 | orchestrator | 00:01:19.666 STDOUT terraform:   + directory_permission = "0777"
2025-05-06 00:01:19.667090 | orchestrator | 00:01:19.667 STDOUT terraform:   + file_permission = "0644"
2025-05-06 00:01:19.667158 | orchestrator | 00:01:19.667 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-05-06 00:01:19.667224 | orchestrator | 00:01:19.667 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.667249 | orchestrator | 00:01:19.667 STDOUT terraform:   }
2025-05-06 00:01:19.667359 | orchestrator | 00:01:19.667 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-06 00:01:19.667400 | orchestrator | 00:01:19.667 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-06 00:01:19.667484 | orchestrator | 00:01:19.667 STDOUT terraform:   + content = (known after apply)
2025-05-06 00:01:19.667553 | orchestrator | 00:01:19.667 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-06 00:01:19.667627 | orchestrator | 00:01:19.667 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-06 00:01:19.667684 | orchestrator | 00:01:19.667 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-06 00:01:19.667751 | orchestrator | 00:01:19.667 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-06 00:01:19.667815 | orchestrator | 00:01:19.667 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-06 00:01:19.667880 | orchestrator | 00:01:19.667 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-06 00:01:19.667932 | orchestrator | 00:01:19.667 STDOUT terraform:   + directory_permission = "0777"
2025-05-06 00:01:19.668004 | orchestrator | 00:01:19.667 STDOUT terraform:   + file_permission = "0644"
2025-05-06 00:01:19.668086 | orchestrator | 00:01:19.668 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-05-06 00:01:19.668182 | orchestrator | 00:01:19.668 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.668222 | orchestrator | 00:01:19.668 STDOUT terraform:   }
2025-05-06 00:01:19.668293 | orchestrator | 00:01:19.668 STDOUT terraform:   # local_file.inventory will be created
2025-05-06 00:01:19.668376 | orchestrator | 00:01:19.668 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-06 00:01:19.668484 | orchestrator | 00:01:19.668 STDOUT terraform:   + content = (known after apply)
2025-05-06 00:01:19.668582 | orchestrator | 00:01:19.668 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-06 00:01:19.668710 | orchestrator | 00:01:19.668 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-06 00:01:19.668795 | orchestrator | 00:01:19.668 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-06 00:01:19.668849 | orchestrator | 00:01:19.668 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-06 00:01:19.668917 | orchestrator | 00:01:19.668 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-06 00:01:19.668982 | orchestrator | 00:01:19.668 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-06 00:01:19.669035 | orchestrator | 00:01:19.668 STDOUT terraform:   + directory_permission = "0777"
2025-05-06 00:01:19.669079 | orchestrator | 00:01:19.669 STDOUT terraform:   + file_permission = "0644"
2025-05-06 00:01:19.669136 | orchestrator | 00:01:19.669 STDOUT terraform:   + filename = "inventory.ci"
2025-05-06 00:01:19.669197 | orchestrator | 00:01:19.669 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.669239 | orchestrator | 00:01:19.669 STDOUT terraform:   }
2025-05-06 00:01:19.669277 | orchestrator | 00:01:19.669 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-06 00:01:19.669334 | orchestrator | 00:01:19.669 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-06 00:01:19.669394 | orchestrator | 00:01:19.669 STDOUT terraform:   + content = (sensitive value)
2025-05-06 00:01:19.669487 | orchestrator | 00:01:19.669 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-06 00:01:19.669544 | orchestrator | 00:01:19.669 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-06 00:01:19.669610 | orchestrator | 00:01:19.669 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-06 00:01:19.669680 | orchestrator | 00:01:19.669 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-06 00:01:19.669740 | orchestrator | 00:01:19.669 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-06 00:01:19.669807 | orchestrator | 00:01:19.669 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-06 00:01:19.669853 | orchestrator | 00:01:19.669 STDOUT terraform:   + directory_permission = "0700"
2025-05-06 00:01:19.669897 | orchestrator | 00:01:19.669 STDOUT terraform:   + file_permission = "0600"
2025-05-06 00:01:19.669953 | orchestrator | 00:01:19.669 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-05-06 00:01:19.670029 | orchestrator | 00:01:19.669 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.670076 | orchestrator | 00:01:19.670 STDOUT terraform:   }
2025-05-06 00:01:19.670133 | orchestrator | 00:01:19.670 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-06 00:01:19.670191 | orchestrator | 00:01:19.670 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-06 00:01:19.670232 | orchestrator | 00:01:19.670 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.670259 | orchestrator | 00:01:19.670 STDOUT terraform:   }
2025-05-06 00:01:19.670350 | orchestrator | 00:01:19.670 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-06 00:01:19.670482 | orchestrator | 00:01:19.670 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-06 00:01:19.670533 | orchestrator | 00:01:19.670 STDOUT terraform:   + attachment = (known after apply)
2025-05-06 00:01:19.670573 | orchestrator | 00:01:19.670 STDOUT terraform:   + availability_zone = "nova"
2025-05-06 00:01:19.670631 | orchestrator | 00:01:19.670 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.670694 | orchestrator | 00:01:19.670 STDOUT terraform:   + image_id = (known after apply)
2025-05-06 00:01:19.670743 | orchestrator | 00:01:19.670 STDOUT terraform:   + metadata = (known after apply)
2025-05-06 00:01:19.670810 | orchestrator | 00:01:19.670 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-05-06 00:01:19.670864 | orchestrator | 00:01:19.670 STDOUT terraform:   + region = (known after apply)
2025-05-06 00:01:19.670902 | orchestrator | 00:01:19.670 STDOUT terraform:   + size = 80
2025-05-06 00:01:19.670938 | orchestrator | 00:01:19.670 STDOUT terraform:   + volume_type = "ssd"
2025-05-06 00:01:19.670961 | orchestrator | 00:01:19.670 STDOUT terraform:   }
2025-05-06 00:01:19.671041 | orchestrator | 00:01:19.670 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-06 00:01:19.671121 | orchestrator | 00:01:19.671 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-06 00:01:19.671173 | orchestrator | 00:01:19.671 STDOUT terraform:   + attachment = (known after apply)
2025-05-06 00:01:19.671208 | orchestrator | 00:01:19.671 STDOUT terraform:   + availability_zone = "nova"
2025-05-06 00:01:19.671262 | orchestrator | 00:01:19.671 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.671314 | orchestrator | 00:01:19.671 STDOUT terraform:   + image_id = (known after apply)
2025-05-06 00:01:19.671367 | orchestrator | 00:01:19.671 STDOUT terraform:   + metadata = (known after apply)
2025-05-06 00:01:19.671446 | orchestrator | 00:01:19.671 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-05-06 00:01:19.671498 | orchestrator | 00:01:19.671 STDOUT terraform:   + region = (known after apply)
2025-05-06 00:01:19.671535 | orchestrator | 00:01:19.671 STDOUT terraform:   + size = 80
2025-05-06 00:01:19.671571 | orchestrator | 00:01:19.671 STDOUT terraform:   + volume_type = "ssd"
2025-05-06 00:01:19.671593 | orchestrator | 00:01:19.671 STDOUT terraform:   }
2025-05-06 00:01:19.671671 | orchestrator | 00:01:19.671 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-06 00:01:19.671749 | orchestrator | 00:01:19.671 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-06 00:01:19.671804 | orchestrator | 00:01:19.671 STDOUT terraform:   + attachment = (known after apply)
2025-05-06 00:01:19.671840 | orchestrator | 00:01:19.671 STDOUT terraform:   + availability_zone = "nova"
2025-05-06 00:01:19.671893 | orchestrator | 00:01:19.671 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.671949 | orchestrator | 00:01:19.671 STDOUT terraform:   + image_id = (known after apply)
2025-05-06 00:01:19.672002 | orchestrator | 00:01:19.671 STDOUT terraform:   + metadata = (known after apply)
2025-05-06 00:01:19.672072 | orchestrator | 00:01:19.671 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-05-06 00:01:19.672127 | orchestrator | 00:01:19.672 STDOUT terraform:   + region = (known after apply)
2025-05-06 00:01:19.672161 | orchestrator | 00:01:19.672 STDOUT terraform:   + size = 80
2025-05-06 00:01:19.672198 | orchestrator | 00:01:19.672 STDOUT terraform:   + volume_type = "ssd"
2025-05-06 00:01:19.672217 | orchestrator | 00:01:19.672 STDOUT terraform:   }
2025-05-06 00:01:19.672297 | orchestrator | 00:01:19.672 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-06 00:01:19.672405 | orchestrator | 00:01:19.672 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-06 00:01:19.672545 | orchestrator | 00:01:19.672 STDOUT terraform:   + attachment = (known after apply)
2025-05-06 00:01:19.672577 | orchestrator | 00:01:19.672 STDOUT terraform:   + availability_zone = "nova"
2025-05-06 00:01:19.672623 | orchestrator | 00:01:19.672 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.672673 | orchestrator | 00:01:19.672 STDOUT terraform:   + image_id = (known after apply)
2025-05-06 00:01:19.672718 | orchestrator | 00:01:19.672 STDOUT terraform:   + metadata = (known after apply)
2025-05-06 00:01:19.672779 | orchestrator | 00:01:19.672 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-05-06 00:01:19.672826 | orchestrator | 00:01:19.672 STDOUT terraform:   + region = (known after apply)
2025-05-06 00:01:19.672858 | orchestrator | 00:01:19.672 STDOUT terraform:   + size = 80
2025-05-06 00:01:19.672890 | orchestrator | 00:01:19.672 STDOUT terraform:   + volume_type = "ssd"
2025-05-06 00:01:19.672914 | orchestrator | 00:01:19.672 STDOUT terraform:   }
2025-05-06 00:01:19.672982 | orchestrator | 00:01:19.672 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-06 00:01:19.673052 | orchestrator | 00:01:19.672 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-06 00:01:19.673096 | orchestrator | 00:01:19.673 STDOUT terraform:   + attachment = (known after apply)
2025-05-06 00:01:19.673128 | orchestrator | 00:01:19.673 STDOUT terraform:   + availability_zone = "nova"
2025-05-06 00:01:19.673176 | orchestrator | 00:01:19.673 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.673224 | orchestrator | 00:01:19.673 STDOUT terraform:   + image_id = (known after apply)
2025-05-06 00:01:19.673273 | orchestrator | 00:01:19.673 STDOUT terraform:   + metadata = (known after apply)
2025-05-06 00:01:19.673332 | orchestrator | 00:01:19.673 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-05-06 00:01:19.673377 | orchestrator | 00:01:19.673 STDOUT terraform:   + region = (known after apply)
2025-05-06 00:01:19.673408 | orchestrator | 00:01:19.673 STDOUT terraform:   + size = 80
2025-05-06 00:01:19.673452 | orchestrator | 00:01:19.673 STDOUT terraform:   + volume_type = "ssd"
2025-05-06 00:01:19.673460 | orchestrator | 00:01:19.673 STDOUT terraform:   }
2025-05-06 00:01:19.673537 | orchestrator | 00:01:19.673 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-06 00:01:19.673604 | orchestrator | 00:01:19.673 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-06 00:01:19.673649 | orchestrator | 00:01:19.673 STDOUT terraform:   + attachment = (known after apply)
2025-05-06 00:01:19.673681 | orchestrator | 00:01:19.673 STDOUT terraform:   + availability_zone = "nova"
2025-05-06 00:01:19.673729 | orchestrator | 00:01:19.673 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.673776 | orchestrator | 00:01:19.673 STDOUT terraform:   + image_id = (known after apply)
2025-05-06 00:01:19.673823 | orchestrator | 00:01:19.673 STDOUT terraform:   + metadata = (known after apply)
2025-05-06 00:01:19.673882 | orchestrator | 00:01:19.673 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-05-06 00:01:19.673929 | orchestrator | 00:01:19.673 STDOUT terraform:   + region = (known after apply)
2025-05-06 00:01:19.673961 | orchestrator | 00:01:19.673 STDOUT terraform:   + size = 80
2025-05-06 00:01:19.673993 | orchestrator | 00:01:19.673 STDOUT terraform:   + volume_type = "ssd"
2025-05-06 00:01:19.674011 | orchestrator | 00:01:19.673 STDOUT terraform:   }
2025-05-06 00:01:19.674096 | orchestrator | 00:01:19.674 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-06 00:01:19.674166 | orchestrator | 00:01:19.674 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-06 00:01:19.674213 | orchestrator | 00:01:19.674 STDOUT terraform:   + attachment = (known after apply)
2025-05-06 00:01:19.674247 | orchestrator | 00:01:19.674 STDOUT terraform:   + availability_zone = "nova"
2025-05-06 00:01:19.674296 | orchestrator | 00:01:19.674 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.674342 | orchestrator | 00:01:19.674 STDOUT terraform:   + image_id = (known after apply)
2025-05-06 00:01:19.674388 | orchestrator | 00:01:19.674 STDOUT terraform:   + metadata = (known after apply)
2025-05-06 00:01:19.674473 | orchestrator | 00:01:19.674 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-05-06 00:01:19.674512 | orchestrator | 00:01:19.674 STDOUT terraform:   + region = (known after apply)
2025-05-06 00:01:19.674544 | orchestrator | 00:01:19.674 STDOUT terraform:   + size = 80
2025-05-06 00:01:19.674578 | orchestrator | 00:01:19.674 STDOUT terraform:   + volume_type = "ssd"
2025-05-06 00:01:19.674598 | orchestrator | 00:01:19.674 STDOUT terraform:   }
2025-05-06 00:01:19.674670 | orchestrator | 00:01:19.674 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-06 00:01:19.674739 | orchestrator | 00:01:19.674 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-06 00:01:19.674784 | orchestrator | 00:01:19.674 STDOUT terraform:   + attachment = (known after apply)
2025-05-06 00:01:19.674817 | orchestrator | 00:01:19.674 STDOUT terraform:   + availability_zone = "nova"
2025-05-06 00:01:19.674865 | orchestrator | 00:01:19.674 STDOUT terraform:   + id = (known after apply)
2025-05-06 00:01:19.674909 | orchestrator | 00:01:19.674 STDOUT terraform:   + metadata = (known after apply)
2025-05-06 00:01:19.674965 | orchestrator | 00:01:19.674 STDOUT terraform:   + name = "testbed-volume-0-node-0"
2025-05-06 00:01:19.675010 | orchestrator | 00:01:19.674 STDOUT terraform:   + region = (known after apply)
2025-05-06 00:01:19.675042 | orchestrator | 00:01:19.675 STDOUT terraform:   + size = 20
2025-05-06 00:01:19.675073 | orchestrator | 00:01:19.675 STDOUT terraform:   + volume_type = "ssd"
2025-05-06 00:01:19.675081 | orchestrator | 00:01:19.675 STDOUT terraform:   }
2025-05-06 00:01:19.675155 | orchestrator | 00:01:19.675 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-05-06 00:01:19.675219 | orchestrator | 00:01:19.675 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-06 00:01:19.675266 | orchestrator | 00:01:19.675 STDOUT terraform:   + attachment = (known after apply)
2025-05-06 00:01:19.675299 | orchestrator | 00:01:19.675 STDOUT terraform:
+ availability_zone = "nova" 2025-05-06 00:01:19.675344 | orchestrator | 00:01:19.675 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.675390 | orchestrator | 00:01:19.675 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.675460 | orchestrator | 00:01:19.675 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-05-06 00:01:19.675506 | orchestrator | 00:01:19.675 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.675537 | orchestrator | 00:01:19.675 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.675567 | orchestrator | 00:01:19.675 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.675586 | orchestrator | 00:01:19.675 STDOUT terraform:  } 2025-05-06 00:01:19.675651 | orchestrator | 00:01:19.675 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-06 00:01:19.675719 | orchestrator | 00:01:19.675 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.675764 | orchestrator | 00:01:19.675 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.675795 | orchestrator | 00:01:19.675 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.675842 | orchestrator | 00:01:19.675 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.675888 | orchestrator | 00:01:19.675 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.675944 | orchestrator | 00:01:19.675 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-05-06 00:01:19.675992 | orchestrator | 00:01:19.675 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.676022 | orchestrator | 00:01:19.675 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.676054 | orchestrator | 00:01:19.676 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.676071 | orchestrator | 00:01:19.676 STDOUT terraform:  } 2025-05-06 00:01:19.676137 | orchestrator | 00:01:19.676 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-06 00:01:19.676202 | orchestrator | 00:01:19.676 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.676249 | orchestrator | 00:01:19.676 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.676280 | orchestrator | 00:01:19.676 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.676326 | orchestrator | 00:01:19.676 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.676373 | orchestrator | 00:01:19.676 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.676445 | orchestrator | 00:01:19.676 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-06 00:01:19.676499 | orchestrator | 00:01:19.676 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.676531 | orchestrator | 00:01:19.676 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.676566 | orchestrator | 00:01:19.676 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.676590 | orchestrator | 00:01:19.676 STDOUT terraform:  } 2025-05-06 00:01:19.676658 | orchestrator | 00:01:19.676 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-06 00:01:19.676721 | orchestrator | 00:01:19.676 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.676769 | orchestrator | 00:01:19.676 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.676800 | orchestrator | 00:01:19.676 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.676846 | orchestrator | 00:01:19.676 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.676892 | orchestrator | 00:01:19.676 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.676951 | orchestrator | 00:01:19.676 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-06 00:01:19.676996 | orchestrator | 00:01:19.676 STDOUT 
terraform:  + region = (known after apply) 2025-05-06 00:01:19.677025 | orchestrator | 00:01:19.676 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.677053 | orchestrator | 00:01:19.677 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.677061 | orchestrator | 00:01:19.677 STDOUT terraform:  } 2025-05-06 00:01:19.677125 | orchestrator | 00:01:19.677 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-06 00:01:19.677183 | orchestrator | 00:01:19.677 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.677223 | orchestrator | 00:01:19.677 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.677250 | orchestrator | 00:01:19.677 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.677294 | orchestrator | 00:01:19.677 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.677336 | orchestrator | 00:01:19.677 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.677384 | orchestrator | 00:01:19.677 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-06 00:01:19.677441 | orchestrator | 00:01:19.677 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.677596 | orchestrator | 00:01:19.677 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.677668 | orchestrator | 00:01:19.677 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.677684 | orchestrator | 00:01:19.677 STDOUT terraform:  } 2025-05-06 00:01:19.677699 | orchestrator | 00:01:19.677 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-06 00:01:19.677718 | orchestrator | 00:01:19.677 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.677733 | orchestrator | 00:01:19.677 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.677769 | orchestrator | 00:01:19.677 STDOUT terraform:  + availability_zone = "nova" 
2025-05-06 00:01:19.677788 | orchestrator | 00:01:19.677 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.677820 | orchestrator | 00:01:19.677 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.677838 | orchestrator | 00:01:19.677 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-05-06 00:01:19.677856 | orchestrator | 00:01:19.677 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.677887 | orchestrator | 00:01:19.677 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.677916 | orchestrator | 00:01:19.677 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.677994 | orchestrator | 00:01:19.677 STDOUT terraform:  } 2025-05-06 00:01:19.678042 | orchestrator | 00:01:19.677 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-06 00:01:19.678082 | orchestrator | 00:01:19.677 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.678111 | orchestrator | 00:01:19.678 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.678130 | orchestrator | 00:01:19.678 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.678161 | orchestrator | 00:01:19.678 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.678207 | orchestrator | 00:01:19.678 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.678257 | orchestrator | 00:01:19.678 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-05-06 00:01:19.678300 | orchestrator | 00:01:19.678 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.678319 | orchestrator | 00:01:19.678 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.678350 | orchestrator | 00:01:19.678 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.678411 | orchestrator | 00:01:19.678 STDOUT terraform:  } 2025-05-06 00:01:19.678452 | orchestrator | 00:01:19.678 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-06 00:01:19.678500 | orchestrator | 00:01:19.678 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.678531 | orchestrator | 00:01:19.678 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.678558 | orchestrator | 00:01:19.678 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.678608 | orchestrator | 00:01:19.678 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.678650 | orchestrator | 00:01:19.678 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.678700 | orchestrator | 00:01:19.678 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-05-06 00:01:19.678742 | orchestrator | 00:01:19.678 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.678761 | orchestrator | 00:01:19.678 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.678779 | orchestrator | 00:01:19.678 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.678797 | orchestrator | 00:01:19.678 STDOUT terraform:  } 2025-05-06 00:01:19.678861 | orchestrator | 00:01:19.678 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-05-06 00:01:19.678921 | orchestrator | 00:01:19.678 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.678962 | orchestrator | 00:01:19.678 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.678981 | orchestrator | 00:01:19.678 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.679019 | orchestrator | 00:01:19.678 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.679060 | orchestrator | 00:01:19.679 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.679107 | orchestrator | 00:01:19.679 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-05-06 00:01:19.679147 | orchestrator | 00:01:19.679 STDOUT 
terraform:  + region = (known after apply) 2025-05-06 00:01:19.679165 | orchestrator | 00:01:19.679 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.679183 | orchestrator | 00:01:19.679 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.679200 | orchestrator | 00:01:19.679 STDOUT terraform:  } 2025-05-06 00:01:19.679275 | orchestrator | 00:01:19.679 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-05-06 00:01:19.679338 | orchestrator | 00:01:19.679 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.679378 | orchestrator | 00:01:19.679 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.679397 | orchestrator | 00:01:19.679 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.679454 | orchestrator | 00:01:19.679 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.679492 | orchestrator | 00:01:19.679 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.679540 | orchestrator | 00:01:19.679 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-05-06 00:01:19.679570 | orchestrator | 00:01:19.679 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.679588 | orchestrator | 00:01:19.679 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.679616 | orchestrator | 00:01:19.679 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.679633 | orchestrator | 00:01:19.679 STDOUT terraform:  } 2025-05-06 00:01:19.679691 | orchestrator | 00:01:19.679 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-05-06 00:01:19.679746 | orchestrator | 00:01:19.679 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.679776 | orchestrator | 00:01:19.679 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.679794 | orchestrator | 00:01:19.679 STDOUT terraform:  + availability_zone = "nova" 
2025-05-06 00:01:19.679920 | orchestrator | 00:01:19.679 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.680035 | orchestrator | 00:01:19.679 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.680054 | orchestrator | 00:01:19.679 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-05-06 00:01:19.680082 | orchestrator | 00:01:19.680 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.680099 | orchestrator | 00:01:19.680 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.680135 | orchestrator | 00:01:19.680 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.680153 | orchestrator | 00:01:19.680 STDOUT terraform:  } 2025-05-06 00:01:19.680208 | orchestrator | 00:01:19.680 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-05-06 00:01:19.680252 | orchestrator | 00:01:19.680 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.680271 | orchestrator | 00:01:19.680 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.680289 | orchestrator | 00:01:19.680 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.680306 | orchestrator | 00:01:19.680 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.680358 | orchestrator | 00:01:19.680 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.680377 | orchestrator | 00:01:19.680 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-05-06 00:01:19.680461 | orchestrator | 00:01:19.680 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.680479 | orchestrator | 00:01:19.680 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.680498 | orchestrator | 00:01:19.680 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.680512 | orchestrator | 00:01:19.680 STDOUT terraform:  } 2025-05-06 00:01:19.680529 | orchestrator | 00:01:19.680 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-05-06 00:01:19.680546 | orchestrator | 00:01:19.680 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.680589 | orchestrator | 00:01:19.680 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.680630 | orchestrator | 00:01:19.680 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.680649 | orchestrator | 00:01:19.680 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.680697 | orchestrator | 00:01:19.680 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.680716 | orchestrator | 00:01:19.680 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-05-06 00:01:19.680731 | orchestrator | 00:01:19.680 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.680748 | orchestrator | 00:01:19.680 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.680762 | orchestrator | 00:01:19.680 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.680780 | orchestrator | 00:01:19.680 STDOUT terraform:  } 2025-05-06 00:01:19.680904 | orchestrator | 00:01:19.680 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-05-06 00:01:19.680928 | orchestrator | 00:01:19.680 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.680936 | orchestrator | 00:01:19.680 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.687135 | orchestrator | 00:01:19.680 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.687228 | orchestrator | 00:01:19.680 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.687240 | orchestrator | 00:01:19.680 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.687247 | orchestrator | 00:01:19.681 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-05-06 00:01:19.687253 | orchestrator | 00:01:19.681 STDOUT 
terraform:  + region = (known after apply) 2025-05-06 00:01:19.687260 | orchestrator | 00:01:19.681 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.687267 | orchestrator | 00:01:19.681 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.687274 | orchestrator | 00:01:19.681 STDOUT terraform:  } 2025-05-06 00:01:19.687281 | orchestrator | 00:01:19.681 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-05-06 00:01:19.687288 | orchestrator | 00:01:19.681 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.687294 | orchestrator | 00:01:19.681 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.687301 | orchestrator | 00:01:19.681 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.687307 | orchestrator | 00:01:19.681 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.687316 | orchestrator | 00:01:19.681 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.687323 | orchestrator | 00:01:19.681 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-05-06 00:01:19.687329 | orchestrator | 00:01:19.681 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.687336 | orchestrator | 00:01:19.681 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.687342 | orchestrator | 00:01:19.681 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.687349 | orchestrator | 00:01:19.681 STDOUT terraform:  } 2025-05-06 00:01:19.687358 | orchestrator | 00:01:19.681 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-05-06 00:01:19.687365 | orchestrator | 00:01:19.681 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.687372 | orchestrator | 00:01:19.681 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.687378 | orchestrator | 00:01:19.681 STDOUT terraform:  + availability_zone = "nova" 
2025-05-06 00:01:19.687384 | orchestrator | 00:01:19.681 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.687391 | orchestrator | 00:01:19.681 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.687397 | orchestrator | 00:01:19.681 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-05-06 00:01:19.687403 | orchestrator | 00:01:19.681 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.687410 | orchestrator | 00:01:19.681 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.687416 | orchestrator | 00:01:19.681 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.687452 | orchestrator | 00:01:19.681 STDOUT terraform:  } 2025-05-06 00:01:19.687459 | orchestrator | 00:01:19.681 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-05-06 00:01:19.687470 | orchestrator | 00:01:19.681 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-06 00:01:19.687477 | orchestrator | 00:01:19.681 STDOUT terraform:  + attachment = (known after apply) 2025-05-06 00:01:19.687483 | orchestrator | 00:01:19.681 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.687490 | orchestrator | 00:01:19.682 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.687496 | orchestrator | 00:01:19.682 STDOUT terraform:  + metadata = (known after apply) 2025-05-06 00:01:19.687502 | orchestrator | 00:01:19.682 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-05-06 00:01:19.687508 | orchestrator | 00:01:19.682 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.687519 | orchestrator | 00:01:19.682 STDOUT terraform:  + size = 20 2025-05-06 00:01:19.687526 | orchestrator | 00:01:19.682 STDOUT terraform:  + volume_type = "ssd" 2025-05-06 00:01:19.687533 | orchestrator | 00:01:19.682 STDOUT terraform:  } 2025-05-06 00:01:19.687541 | orchestrator | 00:01:19.682 STDOUT terraform:  # 
  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (known after apply)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
2025-05-06 00:01:19.688518 | orchestrator | 00:01:19.685 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-06 00:01:19.688524 | orchestrator | 00:01:19.685 STDOUT terraform:  + force_delete = false 2025-05-06 00:01:19.688530 | orchestrator | 00:01:19.685 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.688536 | orchestrator | 00:01:19.686 STDOUT terraform:  + image_id = (known after apply) 2025-05-06 00:01:19.688543 | orchestrator | 00:01:19.686 STDOUT terraform:  + image_name = (known after apply) 2025-05-06 00:01:19.688549 | orchestrator | 00:01:19.686 STDOUT terraform:  + key_pair = "testbed" 2025-05-06 00:01:19.688555 | orchestrator | 00:01:19.686 STDOUT terraform:  + name = "testbed-node-2" 2025-05-06 00:01:19.688565 | orchestrator | 00:01:19.686 STDOUT terraform:  + power_state = "active" 2025-05-06 00:01:19.688571 | orchestrator | 00:01:19.686 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.688578 | orchestrator | 00:01:19.686 STDOUT terraform:  + security_groups = (known after apply) 2025-05-06 00:01:19.688584 | orchestrator | 00:01:19.686 STDOUT terraform:  + stop_before_destroy = false 2025-05-06 00:01:19.688590 | orchestrator | 00:01:19.686 STDOUT terraform:  + updated = (known after apply) 2025-05-06 00:01:19.688596 | orchestrator | 00:01:19.686 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-06 00:01:19.688603 | orchestrator | 00:01:19.686 STDOUT terraform:  + block_device { 2025-05-06 00:01:19.688609 | orchestrator | 00:01:19.686 STDOUT terraform:  + boot_index = 0 2025-05-06 00:01:19.688615 | orchestrator | 00:01:19.686 STDOUT terraform:  + delete_on_termination = false 2025-05-06 00:01:19.688621 | orchestrator | 00:01:19.686 STDOUT terraform:  + destination_type = "volume" 2025-05-06 00:01:19.688631 | orchestrator | 00:01:19.686 STDOUT terraform:  + multiattach = false 2025-05-06 00:01:19.688637 | orchestrator | 00:01:19.686 STDOUT terraform:  + source_type = "volume" 
2025-05-06 00:01:19.688643 | orchestrator | 00:01:19.686 STDOUT terraform:  + uuid = (known after apply) 2025-05-06 00:01:19.688650 | orchestrator | 00:01:19.686 STDOUT terraform:  } 2025-05-06 00:01:19.688656 | orchestrator | 00:01:19.686 STDOUT terraform:  + network { 2025-05-06 00:01:19.688662 | orchestrator | 00:01:19.686 STDOUT terraform:  + access_network = false 2025-05-06 00:01:19.688668 | orchestrator | 00:01:19.686 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-06 00:01:19.688674 | orchestrator | 00:01:19.686 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-06 00:01:19.688681 | orchestrator | 00:01:19.686 STDOUT terraform:  + mac = (known after apply) 2025-05-06 00:01:19.688687 | orchestrator | 00:01:19.686 STDOUT terraform:  + name = (known after apply) 2025-05-06 00:01:19.688693 | orchestrator | 00:01:19.686 STDOUT terraform:  + port = (known after apply) 2025-05-06 00:01:19.688699 | orchestrator | 00:01:19.686 STDOUT terraform:  + uuid = (known after apply) 2025-05-06 00:01:19.688705 | orchestrator | 00:01:19.686 STDOUT terraform:  } 2025-05-06 00:01:19.688712 | orchestrator | 00:01:19.686 STDOUT terraform:  } 2025-05-06 00:01:19.688718 | orchestrator | 00:01:19.686 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-06 00:01:19.688724 | orchestrator | 00:01:19.686 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-06 00:01:19.688731 | orchestrator | 00:01:19.686 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-06 00:01:19.688737 | orchestrator | 00:01:19.686 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-06 00:01:19.688743 | orchestrator | 00:01:19.686 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-06 00:01:19.688749 | orchestrator | 00:01:19.686 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.688766 | orchestrator | 00:01:19.686 STDOUT terraform:  + availability_zone = "nova" 
2025-05-06 00:01:19.688772 | orchestrator | 00:01:19.686 STDOUT terraform:  + config_drive = true 2025-05-06 00:01:19.688778 | orchestrator | 00:01:19.687 STDOUT terraform:  + created = (known after apply) 2025-05-06 00:01:19.688785 | orchestrator | 00:01:19.687 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-06 00:01:19.688793 | orchestrator | 00:01:19.687 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-06 00:01:19.688800 | orchestrator | 00:01:19.687 STDOUT terraform:  + force_delete = false 2025-05-06 00:01:19.688806 | orchestrator | 00:01:19.687 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.688815 | orchestrator | 00:01:19.687 STDOUT terraform:  + image_id = (known after apply) 2025-05-06 00:01:19.688821 | orchestrator | 00:01:19.687 STDOUT terraform:  + image_name = (known after apply) 2025-05-06 00:01:19.688827 | orchestrator | 00:01:19.687 STDOUT terraform:  + key_pair = "testbed" 2025-05-06 00:01:19.688834 | orchestrator | 00:01:19.687 STDOUT terraform:  + name = "testbed-node-3" 2025-05-06 00:01:19.688840 | orchestrator | 00:01:19.687 STDOUT terraform:  + power_state = "active" 2025-05-06 00:01:19.688846 | orchestrator | 00:01:19.687 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.688852 | orchestrator | 00:01:19.687 STDOUT terraform:  + security_groups = (known after apply) 2025-05-06 00:01:19.688858 | orchestrator | 00:01:19.687 STDOUT terraform:  + stop_before_destroy = false 2025-05-06 00:01:19.688865 | orchestrator | 00:01:19.687 STDOUT terraform:  + updated = (known after apply) 2025-05-06 00:01:19.688871 | orchestrator | 00:01:19.687 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-06 00:01:19.688877 | orchestrator | 00:01:19.687 STDOUT terraform:  + block_device { 2025-05-06 00:01:19.688888 | orchestrator | 00:01:19.687 STDOUT terraform:  + boot_index = 0 2025-05-06 00:01:19.688894 | orchestrator | 00:01:19.687 STDOUT terraform:  + 
delete_on_termination = false 2025-05-06 00:01:19.688900 | orchestrator | 00:01:19.687 STDOUT terraform:  + destination_type = "volume" 2025-05-06 00:01:19.688906 | orchestrator | 00:01:19.687 STDOUT terraform:  + multiattach = false 2025-05-06 00:01:19.688912 | orchestrator | 00:01:19.687 STDOUT terraform:  + source_type = "volume" 2025-05-06 00:01:19.688919 | orchestrator | 00:01:19.687 STDOUT terraform:  + uuid = (known after apply) 2025-05-06 00:01:19.688924 | orchestrator | 00:01:19.687 STDOUT terraform:  } 2025-05-06 00:01:19.688930 | orchestrator | 00:01:19.687 STDOUT terraform:  + network { 2025-05-06 00:01:19.688936 | orchestrator | 00:01:19.687 STDOUT terraform:  + access_network = false 2025-05-06 00:01:19.688942 | orchestrator | 00:01:19.687 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-06 00:01:19.688947 | orchestrator | 00:01:19.687 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-06 00:01:19.688953 | orchestrator | 00:01:19.687 STDOUT terraform:  + mac = (known after apply) 2025-05-06 00:01:19.688961 | orchestrator | 00:01:19.687 STDOUT terraform:  + name = (known after apply) 2025-05-06 00:01:19.688967 | orchestrator | 00:01:19.687 STDOUT terraform:  + port = (known after apply) 2025-05-06 00:01:19.688974 | orchestrator | 00:01:19.687 STDOUT terraform:  + uuid = (known after apply) 2025-05-06 00:01:19.688979 | orchestrator | 00:01:19.687 STDOUT terraform:  } 2025-05-06 00:01:19.688985 | orchestrator | 00:01:19.687 STDOUT terraform:  } 2025-05-06 00:01:19.688991 | orchestrator | 00:01:19.687 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-06 00:01:19.688996 | orchestrator | 00:01:19.687 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-06 00:01:19.689002 | orchestrator | 00:01:19.687 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-06 00:01:19.689007 | orchestrator | 00:01:19.687 STDOUT terraform:  + access_ip_v6 = (known after 
apply) 2025-05-06 00:01:19.689013 | orchestrator | 00:01:19.687 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-06 00:01:19.689019 | orchestrator | 00:01:19.687 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.689024 | orchestrator | 00:01:19.688 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.689030 | orchestrator | 00:01:19.688 STDOUT terraform:  + config_drive = true 2025-05-06 00:01:19.689035 | orchestrator | 00:01:19.688 STDOUT terraform:  + created = (known after apply) 2025-05-06 00:01:19.689041 | orchestrator | 00:01:19.688 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-06 00:01:19.689046 | orchestrator | 00:01:19.688 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-06 00:01:19.689052 | orchestrator | 00:01:19.688 STDOUT terraform:  + force_delete = false 2025-05-06 00:01:19.689057 | orchestrator | 00:01:19.688 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.689063 | orchestrator | 00:01:19.688 STDOUT terraform:  + image_id = (known after apply) 2025-05-06 00:01:19.689069 | orchestrator | 00:01:19.688 STDOUT terraform:  + image_name = (known after apply) 2025-05-06 00:01:19.689074 | orchestrator | 00:01:19.688 STDOUT terraform:  + key_pair = "testbed" 2025-05-06 00:01:19.689080 | orchestrator | 00:01:19.688 STDOUT terraform:  + name = "testbed-node-4" 2025-05-06 00:01:19.689085 | orchestrator | 00:01:19.688 STDOUT terraform:  + power_state = "active" 2025-05-06 00:01:19.689091 | orchestrator | 00:01:19.688 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.689096 | orchestrator | 00:01:19.688 STDOUT terraform:  + security_groups = (known after apply) 2025-05-06 00:01:19.689102 | orchestrator | 00:01:19.688 STDOUT terraform:  + stop_before_destroy = false 2025-05-06 00:01:19.689111 | orchestrator | 00:01:19.688 STDOUT terraform:  + updated = (known after apply) 2025-05-06 00:01:19.689226 | orchestrator | 00:01:19.688 STDOUT terraform:  + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-06 00:01:19.689235 | orchestrator | 00:01:19.688 STDOUT terraform:  + block_device { 2025-05-06 00:01:19.689240 | orchestrator | 00:01:19.688 STDOUT terraform:  + boot_index = 0 2025-05-06 00:01:19.689252 | orchestrator | 00:01:19.688 STDOUT terraform:  + delete_on_termination = false 2025-05-06 00:01:19.689258 | orchestrator | 00:01:19.688 STDOUT terraform:  + destination_type = "volume" 2025-05-06 00:01:19.689263 | orchestrator | 00:01:19.688 STDOUT terraform:  + multiattach = false 2025-05-06 00:01:19.689270 | orchestrator | 00:01:19.688 STDOUT terraform:  + source_type = "volume" 2025-05-06 00:01:19.689280 | orchestrator | 00:01:19.688 STDOUT terraform:  + uuid = (known after apply) 2025-05-06 00:01:19.689285 | orchestrator | 00:01:19.688 STDOUT terraform:  } 2025-05-06 00:01:19.689291 | orchestrator | 00:01:19.688 STDOUT terraform:  + network { 2025-05-06 00:01:19.689297 | orchestrator | 00:01:19.688 STDOUT terraform:  + access_network = false 2025-05-06 00:01:19.689305 | orchestrator | 00:01:19.688 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-06 00:01:19.689311 | orchestrator | 00:01:19.688 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-06 00:01:19.689317 | orchestrator | 00:01:19.688 STDOUT terraform:  + mac = (known after apply) 2025-05-06 00:01:19.689323 | orchestrator | 00:01:19.688 STDOUT terraform:  + name = (known after apply) 2025-05-06 00:01:19.689328 | orchestrator | 00:01:19.688 STDOUT terraform:  + port = (known after apply) 2025-05-06 00:01:19.689334 | orchestrator | 00:01:19.688 STDOUT terraform:  + uuid = (known after apply) 2025-05-06 00:01:19.689339 | orchestrator | 00:01:19.688 STDOUT terraform:  } 2025-05-06 00:01:19.689345 | orchestrator | 00:01:19.688 STDOUT terraform:  } 2025-05-06 00:01:19.689350 | orchestrator | 00:01:19.688 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-06 00:01:19.689356 | 
orchestrator | 00:01:19.688 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-06 00:01:19.689362 | orchestrator | 00:01:19.689 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-06 00:01:19.689368 | orchestrator | 00:01:19.689 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-06 00:01:19.689373 | orchestrator | 00:01:19.689 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-06 00:01:19.689382 | orchestrator | 00:01:19.689 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.689388 | orchestrator | 00:01:19.689 STDOUT terraform:  + availability_zone = "nova" 2025-05-06 00:01:19.689393 | orchestrator | 00:01:19.689 STDOUT terraform:  + config_drive = true 2025-05-06 00:01:19.689399 | orchestrator | 00:01:19.689 STDOUT terraform:  + created = (known after apply) 2025-05-06 00:01:19.689404 | orchestrator | 00:01:19.689 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-06 00:01:19.689413 | orchestrator | 00:01:19.689 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-06 00:01:19.689433 | orchestrator | 00:01:19.689 STDOUT terraform:  + force_delete = false 2025-05-06 00:01:19.689439 | orchestrator | 00:01:19.689 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.689444 | orchestrator | 00:01:19.689 STDOUT terraform:  + image_id = (known after apply) 2025-05-06 00:01:19.689456 | orchestrator | 00:01:19.689 STDOUT terraform:  + image_name = (known after apply) 2025-05-06 00:01:19.689462 | orchestrator | 00:01:19.689 STDOUT terraform:  + key_pair = "testbed" 2025-05-06 00:01:19.689469 | orchestrator | 00:01:19.689 STDOUT terraform:  + name = "testbed-node-5" 2025-05-06 00:01:19.689498 | orchestrator | 00:01:19.689 STDOUT terraform:  + power_state = "active" 2025-05-06 00:01:19.689531 | orchestrator | 00:01:19.689 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.689586 | orchestrator | 00:01:19.689 STDOUT terraform:  + 
security_groups = (known after apply) 2025-05-06 00:01:19.689620 | orchestrator | 00:01:19.689 STDOUT terraform:  + stop_before_destroy = false 2025-05-06 00:01:19.689628 | orchestrator | 00:01:19.689 STDOUT terraform:  + updated = (known after apply) 2025-05-06 00:01:19.689680 | orchestrator | 00:01:19.689 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-06 00:01:19.689711 | orchestrator | 00:01:19.689 STDOUT terraform:  + block_device { 2025-05-06 00:01:19.689719 | orchestrator | 00:01:19.689 STDOUT terraform:  + boot_index = 0 2025-05-06 00:01:19.689747 | orchestrator | 00:01:19.689 STDOUT terraform:  + delete_on_termination = false 2025-05-06 00:01:19.689772 | orchestrator | 00:01:19.689 STDOUT terraform:  + destination_type = "volume" 2025-05-06 00:01:19.689800 | orchestrator | 00:01:19.689 STDOUT terraform:  + multiattach = false 2025-05-06 00:01:19.689831 | orchestrator | 00:01:19.689 STDOUT terraform:  + source_type = "volume" 2025-05-06 00:01:19.689870 | orchestrator | 00:01:19.689 STDOUT terraform:  + uuid = (known after apply) 2025-05-06 00:01:19.689878 | orchestrator | 00:01:19.689 STDOUT terraform:  } 2025-05-06 00:01:19.689885 | orchestrator | 00:01:19.689 STDOUT terraform:  + network { 2025-05-06 00:01:19.689914 | orchestrator | 00:01:19.689 STDOUT terraform:  + access_network = false 2025-05-06 00:01:19.689944 | orchestrator | 00:01:19.689 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-06 00:01:19.689974 | orchestrator | 00:01:19.689 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-06 00:01:19.690007 | orchestrator | 00:01:19.689 STDOUT terraform:  + mac = (known after apply) 2025-05-06 00:01:19.690053 | orchestrator | 00:01:19.689 STDOUT terraform:  + name = (known after apply) 2025-05-06 00:01:19.690087 | orchestrator | 00:01:19.690 STDOUT terraform:  + port = (known after apply) 2025-05-06 00:01:19.690118 | orchestrator | 00:01:19.690 STDOUT terraform:  + uuid = (known after apply) 
2025-05-06 00:01:19.690125 | orchestrator | 00:01:19.690 STDOUT terraform:  } 2025-05-06 00:01:19.690132 | orchestrator | 00:01:19.690 STDOUT terraform:  } 2025-05-06 00:01:19.690177 | orchestrator | 00:01:19.690 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-06 00:01:19.690207 | orchestrator | 00:01:19.690 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-06 00:01:19.690235 | orchestrator | 00:01:19.690 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-06 00:01:19.690266 | orchestrator | 00:01:19.690 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.690278 | orchestrator | 00:01:19.690 STDOUT terraform:  + name = "testbed" 2025-05-06 00:01:19.690309 | orchestrator | 00:01:19.690 STDOUT terraform:  + private_key = (sensitive value) 2025-05-06 00:01:19.690337 | orchestrator | 00:01:19.690 STDOUT terraform:  + public_key = (known after apply) 2025-05-06 00:01:19.690367 | orchestrator | 00:01:19.690 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.690395 | orchestrator | 00:01:19.690 STDOUT terraform:  + user_id = (known after apply) 2025-05-06 00:01:19.690403 | orchestrator | 00:01:19.690 STDOUT terraform:  } 2025-05-06 00:01:19.690478 | orchestrator | 00:01:19.690 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-06 00:01:19.690518 | orchestrator | 00:01:19.690 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-06 00:01:19.690547 | orchestrator | 00:01:19.690 STDOUT terraform:  + device = (known after apply) 2025-05-06 00:01:19.690577 | orchestrator | 00:01:19.690 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.690606 | orchestrator | 00:01:19.690 STDOUT terraform:  + instance_id = (known after apply) 2025-05-06 00:01:19.690635 | orchestrator | 00:01:19.690 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.690664 | 
orchestrator | 00:01:19.690 STDOUT terraform:  + volume_id = (known after apply) 2025-05-06 00:01:19.690671 | orchestrator | 00:01:19.690 STDOUT terraform:  } 2025-05-06 00:01:19.690722 | orchestrator | 00:01:19.690 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-05-06 00:01:19.690771 | orchestrator | 00:01:19.690 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-06 00:01:19.690800 | orchestrator | 00:01:19.690 STDOUT terraform:  + device = (known after apply) 2025-05-06 00:01:19.690829 | orchestrator | 00:01:19.690 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.690858 | orchestrator | 00:01:19.690 STDOUT terraform:  + instance_id = (known after apply) 2025-05-06 00:01:19.690887 | orchestrator | 00:01:19.690 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.690920 | orchestrator | 00:01:19.690 STDOUT terraform:  + volume_id = (known after apply) 2025-05-06 00:01:19.690928 | orchestrator | 00:01:19.690 STDOUT terraform:  } 2025-05-06 00:01:19.690977 | orchestrator | 00:01:19.690 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-05-06 00:01:19.691027 | orchestrator | 00:01:19.690 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-06 00:01:19.691056 | orchestrator | 00:01:19.691 STDOUT terraform:  + device = (known after apply) 2025-05-06 00:01:19.691084 | orchestrator | 00:01:19.691 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.691113 | orchestrator | 00:01:19.691 STDOUT terraform:  + instance_id = (known after apply) 2025-05-06 00:01:19.691143 | orchestrator | 00:01:19.691 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.691172 | orchestrator | 00:01:19.691 STDOUT terraform:  + volume_id = (known after apply) 2025-05-06 00:01:19.691183 | orchestrator | 00:01:19.691 STDOUT 
terraform:  } 2025-05-06 00:01:19.691230 | orchestrator | 00:01:19.691 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-05-06 00:01:19.691279 | orchestrator | 00:01:19.691 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-06 00:01:19.691307 | orchestrator | 00:01:19.691 STDOUT terraform:  + device = (known after apply) 2025-05-06 00:01:19.691335 | orchestrator | 00:01:19.691 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.691364 | orchestrator | 00:01:19.691 STDOUT terraform:  + instance_id = (known after apply) 2025-05-06 00:01:19.691393 | orchestrator | 00:01:19.691 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.691432 | orchestrator | 00:01:19.691 STDOUT terraform:  + volume_id = (known after apply) 2025-05-06 00:01:19.691488 | orchestrator | 00:01:19.691 STDOUT terraform:  } 2025-05-06 00:01:19.691496 | orchestrator | 00:01:19.691 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-05-06 00:01:19.691537 | orchestrator | 00:01:19.691 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-06 00:01:19.691566 | orchestrator | 00:01:19.691 STDOUT terraform:  + device = (known after apply) 2025-05-06 00:01:19.691596 | orchestrator | 00:01:19.691 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.691623 | orchestrator | 00:01:19.691 STDOUT terraform:  + instance_id = (known after apply) 2025-05-06 00:01:19.691652 | orchestrator | 00:01:19.691 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.691681 | orchestrator | 00:01:19.691 STDOUT terraform:  + volume_id = (known after apply) 2025-05-06 00:01:19.691688 | orchestrator | 00:01:19.691 STDOUT terraform:  } 2025-05-06 00:01:19.691741 | orchestrator | 00:01:19.691 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2025-05-06 00:01:19.691791 | orchestrator | 00:01:19.691 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-06 00:01:19.691819 | orchestrator | 00:01:19.691 STDOUT terraform:  + device = (known after apply) 2025-05-06 00:01:19.691853 | orchestrator | 00:01:19.691 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.691887 | orchestrator | 00:01:19.691 STDOUT terraform:  + instance_id = (known after apply) 2025-05-06 00:01:19.691895 | orchestrator | 00:01:19.691 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.691930 | orchestrator | 00:01:19.691 STDOUT terraform:  + volume_id = (known after apply) 2025-05-06 00:01:19.691938 | orchestrator | 00:01:19.691 STDOUT terraform:  } 2025-05-06 00:01:19.691990 | orchestrator | 00:01:19.691 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-05-06 00:01:19.692038 | orchestrator | 00:01:19.691 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-06 00:01:19.692067 | orchestrator | 00:01:19.692 STDOUT terraform:  + device = (known after apply) 2025-05-06 00:01:19.692095 | orchestrator | 00:01:19.692 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.692123 | orchestrator | 00:01:19.692 STDOUT terraform:  + instance_id = (known after apply) 2025-05-06 00:01:19.692153 | orchestrator | 00:01:19.692 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.692183 | orchestrator | 00:01:19.692 STDOUT terraform:  + volume_id = (known after apply) 2025-05-06 00:01:19.692190 | orchestrator | 00:01:19.692 STDOUT terraform:  } 2025-05-06 00:01:19.692241 | orchestrator | 00:01:19.692 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-05-06 00:01:19.692289 | orchestrator | 00:01:19.692 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" 
"node_volume_attachment" { 2025-05-06 00:01:19.692319 | orchestrator | 00:01:19.692 STDOUT terraform:  + device = (known after apply) 2025-05-06 00:01:19.692349 | orchestrator | 00:01:19.692 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.692378 | orchestrator | 00:01:19.692 STDOUT terraform:  + instance_id = (known after apply) 2025-05-06 00:01:19.692413 | orchestrator | 00:01:19.692 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.692459 | orchestrator | 00:01:19.692 STDOUT terraform:  + volume_id = (known after apply) 2025-05-06 00:01:19.692466 | orchestrator | 00:01:19.692 STDOUT terraform:  } 2025-05-06 00:01:19.692519 | orchestrator | 00:01:19.692 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-05-06 00:01:19.692567 | orchestrator | 00:01:19.692 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-06 00:01:19.692595 | orchestrator | 00:01:19.692 STDOUT terraform:  + device = (known after apply) 2025-05-06 00:01:19.692625 | orchestrator | 00:01:19.692 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.692653 | orchestrator | 00:01:19.692 STDOUT terraform:  + instance_id = (known after apply) 2025-05-06 00:01:19.692681 | orchestrator | 00:01:19.692 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.692710 | orchestrator | 00:01:19.692 STDOUT terraform:  + volume_id = (known after apply) 2025-05-06 00:01:19.692717 | orchestrator | 00:01:19.692 STDOUT terraform:  } 2025-05-06 00:01:19.692768 | orchestrator | 00:01:19.692 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-05-06 00:01:19.692817 | orchestrator | 00:01:19.692 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-06 00:01:19.692846 | orchestrator | 00:01:19.692 STDOUT terraform:  + device = (known after apply) 2025-05-06 
2025-05-06 00:01:19.692 | orchestrator | STDOUT terraform:
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # (node_volume_attachment[11] through [17] are planned with identical
  #  attributes: device, id, instance_id, region and volume_id are all
  #  "(known after apply)")
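The count-indexed attachment blocks above are the shape of plan output Terraform emits for a counted volume-attachment resource. A minimal sketch of a configuration that would produce such a plan follows; only the resource type and the name `node_volume_attachment` appear in the log, while the `node` and `node_volume` resource names are assumptions for illustration:

```hcl
# Hypothetical sketch of the configuration behind the plan output above.
# "node" and "node_volume" are assumed names; every attribute is unknown
# until apply because it depends on the instance and volume IDs.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = length(openstack_blockstorage_volume_v3.node_volume)
  instance_id = openstack_compute_instance_v2.node[count.index].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```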
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-05-06 00:01:19.695147 | orchestrator | 00:01:19.695 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-05-06 00:01:19.695177 | orchestrator | 00:01:19.695 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-06 00:01:19.695205 | orchestrator | 00:01:19.695 STDOUT terraform:  + floating_ip = (known after apply) 2025-05-06 00:01:19.695234 | orchestrator | 00:01:19.695 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.695262 | orchestrator | 00:01:19.695 STDOUT terraform:  + port_id = (known after apply) 2025-05-06 00:01:19.695293 | orchestrator | 00:01:19.695 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.695300 | orchestrator | 00:01:19.695 STDOUT terraform:  } 2025-05-06 00:01:19.695346 | orchestrator | 00:01:19.695 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-05-06 00:01:19.695393 | orchestrator | 00:01:19.695 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-05-06 00:01:19.695428 | orchestrator | 00:01:19.695 STDOUT terraform:  + address = (known after apply) 2025-05-06 00:01:19.695455 | orchestrator | 00:01:19.695 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.695480 | orchestrator | 00:01:19.695 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-06 00:01:19.695507 | orchestrator | 00:01:19.695 STDOUT terraform:  + dns_name = (known after apply) 2025-05-06 00:01:19.695532 | orchestrator | 00:01:19.695 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-06 00:01:19.695557 | orchestrator | 00:01:19.695 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.695575 | orchestrator | 00:01:19.695 STDOUT terraform:  + pool = "public" 2025-05-06 00:01:19.695599 | orchestrator | 00:01:19.695 STDOUT terraform:  + 
port_id = (known after apply) 2025-05-06 00:01:19.695624 | orchestrator | 00:01:19.695 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.695649 | orchestrator | 00:01:19.695 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-06 00:01:19.695674 | orchestrator | 00:01:19.695 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-06 00:01:19.695682 | orchestrator | 00:01:19.695 STDOUT terraform:  } 2025-05-06 00:01:19.695729 | orchestrator | 00:01:19.695 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-05-06 00:01:19.695774 | orchestrator | 00:01:19.695 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-05-06 00:01:19.695810 | orchestrator | 00:01:19.695 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-06 00:01:19.695846 | orchestrator | 00:01:19.695 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.695869 | orchestrator | 00:01:19.695 STDOUT terraform:  + availability_zone_hints = [ 2025-05-06 00:01:19.695886 | orchestrator | 00:01:19.695 STDOUT terraform:  + "nova", 2025-05-06 00:01:19.695893 | orchestrator | 00:01:19.695 STDOUT terraform:  ] 2025-05-06 00:01:19.695924 | orchestrator | 00:01:19.695 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-06 00:01:19.695960 | orchestrator | 00:01:19.695 STDOUT terraform:  + external = (known after apply) 2025-05-06 00:01:19.695998 | orchestrator | 00:01:19.695 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.696035 | orchestrator | 00:01:19.695 STDOUT terraform:  + mtu = (known after apply) 2025-05-06 00:01:19.696074 | orchestrator | 00:01:19.696 STDOUT terraform:  + name = "net-testbed-management" 2025-05-06 00:01:19.696111 | orchestrator | 00:01:19.696 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-06 00:01:19.696148 | orchestrator | 00:01:19.696 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-06 
00:01:19.696191 | orchestrator | 00:01:19.696 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.696222 | orchestrator | 00:01:19.696 STDOUT terraform:  + shared = (known after apply) 2025-05-06 00:01:19.696258 | orchestrator | 00:01:19.696 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-06 00:01:19.696294 | orchestrator | 00:01:19.696 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-05-06 00:01:19.696318 | orchestrator | 00:01:19.696 STDOUT terraform:  + segments (known after apply) 2025-05-06 00:01:19.696325 | orchestrator | 00:01:19.696 STDOUT terraform:  } 2025-05-06 00:01:19.696375 | orchestrator | 00:01:19.696 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-05-06 00:01:19.696443 | orchestrator | 00:01:19.696 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-05-06 00:01:19.696480 | orchestrator | 00:01:19.696 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-06 00:01:19.696518 | orchestrator | 00:01:19.696 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-06 00:01:19.696553 | orchestrator | 00:01:19.696 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-06 00:01:19.696588 | orchestrator | 00:01:19.696 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.696624 | orchestrator | 00:01:19.696 STDOUT terraform:  + device_id = (known after apply) 2025-05-06 00:01:19.696661 | orchestrator | 00:01:19.696 STDOUT terraform:  + device_owner = (known after apply) 2025-05-06 00:01:19.696697 | orchestrator | 00:01:19.696 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-06 00:01:19.696734 | orchestrator | 00:01:19.696 STDOUT terraform:  + dns_name = (known after apply) 2025-05-06 00:01:19.696771 | orchestrator | 00:01:19.696 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.696808 | orchestrator | 00:01:19.696 STDOUT terraform:  + 
mac_address = (known after apply) 2025-05-06 00:01:19.696847 | orchestrator | 00:01:19.696 STDOUT terraform:  + network_id = (known after apply) 2025-05-06 00:01:19.696879 | orchestrator | 00:01:19.696 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-06 00:01:19.696915 | orchestrator | 00:01:19.696 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-06 00:01:19.696952 | orchestrator | 00:01:19.696 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.696989 | orchestrator | 00:01:19.696 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-06 00:01:19.697025 | orchestrator | 00:01:19.696 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-06 00:01:19.697044 | orchestrator | 00:01:19.697 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.697072 | orchestrator | 00:01:19.697 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-06 00:01:19.697079 | orchestrator | 00:01:19.697 STDOUT terraform:  } 2025-05-06 00:01:19.697103 | orchestrator | 00:01:19.697 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.697133 | orchestrator | 00:01:19.697 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-06 00:01:19.697141 | orchestrator | 00:01:19.697 STDOUT terraform:  } 2025-05-06 00:01:19.697167 | orchestrator | 00:01:19.697 STDOUT terraform:  + binding (known after apply) 2025-05-06 00:01:19.697174 | orchestrator | 00:01:19.697 STDOUT terraform:  + fixed_ip { 2025-05-06 00:01:19.697203 | orchestrator | 00:01:19.697 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-05-06 00:01:19.697232 | orchestrator | 00:01:19.697 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-06 00:01:19.697239 | orchestrator | 00:01:19.697 STDOUT terraform:  } 2025-05-06 00:01:19.697245 | orchestrator | 00:01:19.697 STDOUT terraform:  } 2025-05-06 00:01:19.697297 | orchestrator | 00:01:19.697 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-05-06 00:01:19.697343 | orchestrator | 00:01:19.697 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-06 00:01:19.697380 | orchestrator | 00:01:19.697 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-06 00:01:19.697415 | orchestrator | 00:01:19.697 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-06 00:01:19.697468 | orchestrator | 00:01:19.697 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-06 00:01:19.697505 | orchestrator | 00:01:19.697 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.697541 | orchestrator | 00:01:19.697 STDOUT terraform:  + device_id = (known after apply) 2025-05-06 00:01:19.697578 | orchestrator | 00:01:19.697 STDOUT terraform:  + device_owner = (known after apply) 2025-05-06 00:01:19.697619 | orchestrator | 00:01:19.697 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-06 00:01:19.697655 | orchestrator | 00:01:19.697 STDOUT terraform:  + dns_name = (known after apply) 2025-05-06 00:01:19.697691 | orchestrator | 00:01:19.697 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.697728 | orchestrator | 00:01:19.697 STDOUT terraform:  + mac_address = (known after apply) 2025-05-06 00:01:19.697763 | orchestrator | 00:01:19.697 STDOUT terraform:  + network_id = (known after apply) 2025-05-06 00:01:19.697798 | orchestrator | 00:01:19.697 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-06 00:01:19.697835 | orchestrator | 00:01:19.697 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-06 00:01:19.697871 | orchestrator | 00:01:19.697 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.697906 | orchestrator | 00:01:19.697 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-06 00:01:19.697942 | orchestrator | 00:01:19.697 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-06 00:01:19.697961 | 
orchestrator | 00:01:19.697 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.697988 | orchestrator | 00:01:19.697 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-06 00:01:19.697995 | orchestrator | 00:01:19.697 STDOUT terraform:  } 2025-05-06 00:01:19.698028 | orchestrator | 00:01:19.697 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.698061 | orchestrator | 00:01:19.698 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-06 00:01:19.698068 | orchestrator | 00:01:19.698 STDOUT terraform:  } 2025-05-06 00:01:19.698094 | orchestrator | 00:01:19.698 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.698123 | orchestrator | 00:01:19.698 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-06 00:01:19.698130 | orchestrator | 00:01:19.698 STDOUT terraform:  } 2025-05-06 00:01:19.698154 | orchestrator | 00:01:19.698 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.698183 | orchestrator | 00:01:19.698 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-06 00:01:19.698190 | orchestrator | 00:01:19.698 STDOUT terraform:  } 2025-05-06 00:01:19.698218 | orchestrator | 00:01:19.698 STDOUT terraform:  + binding (known after apply) 2025-05-06 00:01:19.698225 | orchestrator | 00:01:19.698 STDOUT terraform:  + fixed_ip { 2025-05-06 00:01:19.698255 | orchestrator | 00:01:19.698 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-05-06 00:01:19.698286 | orchestrator | 00:01:19.698 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-06 00:01:19.698293 | orchestrator | 00:01:19.698 STDOUT terraform:  } 2025-05-06 00:01:19.698300 | orchestrator | 00:01:19.698 STDOUT terraform:  } 2025-05-06 00:01:19.698355 | orchestrator | 00:01:19.698 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-05-06 00:01:19.698399 | orchestrator | 00:01:19.698 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-06 
00:01:19.698449 | orchestrator | 00:01:19.698 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-06 00:01:19.698487 | orchestrator | 00:01:19.698 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-06 00:01:19.698520 | orchestrator | 00:01:19.698 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-06 00:01:19.698607 | orchestrator | 00:01:19.698 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.698616 | orchestrator | 00:01:19.698 STDOUT terraform:  + device_id = (known after apply) 2025-05-06 00:01:19.698657 | orchestrator | 00:01:19.698 STDOUT terraform:  + device_owner = (known after apply) 2025-05-06 00:01:19.698695 | orchestrator | 00:01:19.698 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-06 00:01:19.698729 | orchestrator | 00:01:19.698 STDOUT terraform:  + dns_name = (known after apply) 2025-05-06 00:01:19.698766 | orchestrator | 00:01:19.698 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.698802 | orchestrator | 00:01:19.698 STDOUT terraform:  + mac_address = (known after apply) 2025-05-06 00:01:19.698840 | orchestrator | 00:01:19.698 STDOUT terraform:  + network_id = (known after apply) 2025-05-06 00:01:19.698876 | orchestrator | 00:01:19.698 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-06 00:01:19.698911 | orchestrator | 00:01:19.698 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-06 00:01:19.698947 | orchestrator | 00:01:19.698 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.698982 | orchestrator | 00:01:19.698 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-06 00:01:19.699021 | orchestrator | 00:01:19.698 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-06 00:01:19.699030 | orchestrator | 00:01:19.699 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.699063 | orchestrator | 00:01:19.699 STDOUT terraform:  + ip_address = 
"192.168.112.0/20" 2025-05-06 00:01:19.699070 | orchestrator | 00:01:19.699 STDOUT terraform:  } 2025-05-06 00:01:19.699096 | orchestrator | 00:01:19.699 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.699125 | orchestrator | 00:01:19.699 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-06 00:01:19.699132 | orchestrator | 00:01:19.699 STDOUT terraform:  } 2025-05-06 00:01:19.699156 | orchestrator | 00:01:19.699 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.699186 | orchestrator | 00:01:19.699 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-06 00:01:19.699193 | orchestrator | 00:01:19.699 STDOUT terraform:  } 2025-05-06 00:01:19.699219 | orchestrator | 00:01:19.699 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.699245 | orchestrator | 00:01:19.699 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-06 00:01:19.699253 | orchestrator | 00:01:19.699 STDOUT terraform:  } 2025-05-06 00:01:19.699279 | orchestrator | 00:01:19.699 STDOUT terraform:  + binding (known after apply) 2025-05-06 00:01:19.699286 | orchestrator | 00:01:19.699 STDOUT terraform:  + fixed_ip { 2025-05-06 00:01:19.699316 | orchestrator | 00:01:19.699 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-06 00:01:19.699347 | orchestrator | 00:01:19.699 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-06 00:01:19.699354 | orchestrator | 00:01:19.699 STDOUT terraform:  } 2025-05-06 00:01:19.699377 | orchestrator | 00:01:19.699 STDOUT terraform:  } 2025-05-06 00:01:19.699431 | orchestrator | 00:01:19.699 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-06 00:01:19.699485 | orchestrator | 00:01:19.699 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-06 00:01:19.699522 | orchestrator | 00:01:19.699 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-06 00:01:19.699559 | orchestrator | 00:01:19.699 
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-06 00:01:19.699594 | orchestrator | 00:01:19.699 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-06 00:01:19.699630 | orchestrator | 00:01:19.699 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.699667 | orchestrator | 00:01:19.699 STDOUT terraform:  + device_id = (known after apply) 2025-05-06 00:01:19.699702 | orchestrator | 00:01:19.699 STDOUT terraform:  + device_owner = (known after apply) 2025-05-06 00:01:19.699737 | orchestrator | 00:01:19.699 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-06 00:01:19.699774 | orchestrator | 00:01:19.699 STDOUT terraform:  + dns_name = (known after apply) 2025-05-06 00:01:19.699812 | orchestrator | 00:01:19.699 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.699846 | orchestrator | 00:01:19.699 STDOUT terraform:  + mac_address = (known after apply) 2025-05-06 00:01:19.699883 | orchestrator | 00:01:19.699 STDOUT terraform:  + network_id = (known after apply) 2025-05-06 00:01:19.699918 | orchestrator | 00:01:19.699 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-06 00:01:19.699954 | orchestrator | 00:01:19.699 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-06 00:01:19.699992 | orchestrator | 00:01:19.699 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.700026 | orchestrator | 00:01:19.699 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-06 00:01:19.702152 | orchestrator | 00:01:19.700 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-06 00:01:19.702227 | orchestrator | 00:01:19.700 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.702243 | orchestrator | 00:01:19.700 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-06 00:01:19.702256 | orchestrator | 00:01:19.700 STDOUT terraform:  } 2025-05-06 00:01:19.702268 | orchestrator | 00:01:19.700 STDOUT terraform:  
+ allowed_address_pairs { 2025-05-06 00:01:19.702279 | orchestrator | 00:01:19.700 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-06 00:01:19.702291 | orchestrator | 00:01:19.700 STDOUT terraform:  } 2025-05-06 00:01:19.702302 | orchestrator | 00:01:19.700 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.702313 | orchestrator | 00:01:19.700 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-06 00:01:19.702325 | orchestrator | 00:01:19.700 STDOUT terraform:  } 2025-05-06 00:01:19.702336 | orchestrator | 00:01:19.700 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.702347 | orchestrator | 00:01:19.700 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-06 00:01:19.702358 | orchestrator | 00:01:19.700 STDOUT terraform:  } 2025-05-06 00:01:19.702369 | orchestrator | 00:01:19.700 STDOUT terraform:  + binding (known after apply) 2025-05-06 00:01:19.702380 | orchestrator | 00:01:19.700 STDOUT terraform:  + fixed_ip { 2025-05-06 00:01:19.702392 | orchestrator | 00:01:19.700 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-06 00:01:19.702438 | orchestrator | 00:01:19.700 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-06 00:01:19.702451 | orchestrator | 00:01:19.700 STDOUT terraform:  } 2025-05-06 00:01:19.702463 | orchestrator | 00:01:19.700 STDOUT terraform:  } 2025-05-06 00:01:19.702475 | orchestrator | 00:01:19.700 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-06 00:01:19.702487 | orchestrator | 00:01:19.700 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-06 00:01:19.702499 | orchestrator | 00:01:19.700 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-06 00:01:19.702510 | orchestrator | 00:01:19.700 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-06 00:01:19.702522 | orchestrator | 00:01:19.700 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-05-06 00:01:19.702533 | orchestrator | 00:01:19.700 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.702544 | orchestrator | 00:01:19.700 STDOUT terraform:  + device_id = (known after apply) 2025-05-06 00:01:19.702555 | orchestrator | 00:01:19.700 STDOUT terraform:  + device_owner = (known after apply) 2025-05-06 00:01:19.702567 | orchestrator | 00:01:19.700 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-06 00:01:19.702578 | orchestrator | 00:01:19.700 STDOUT terraform:  + dns_name = (known after apply) 2025-05-06 00:01:19.702589 | orchestrator | 00:01:19.700 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.702600 | orchestrator | 00:01:19.700 STDOUT terraform:  + mac_address = (known after apply) 2025-05-06 00:01:19.702611 | orchestrator | 00:01:19.700 STDOUT terraform:  + network_id = (known after apply) 2025-05-06 00:01:19.702622 | orchestrator | 00:01:19.700 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-06 00:01:19.702633 | orchestrator | 00:01:19.700 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-06 00:01:19.702644 | orchestrator | 00:01:19.700 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.702656 | orchestrator | 00:01:19.701 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-06 00:01:19.702667 | orchestrator | 00:01:19.701 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-06 00:01:19.702678 | orchestrator | 00:01:19.701 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.702708 | orchestrator | 00:01:19.701 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-06 00:01:19.702721 | orchestrator | 00:01:19.701 STDOUT terraform:  } 2025-05-06 00:01:19.702732 | orchestrator | 00:01:19.701 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.702743 | orchestrator | 00:01:19.701 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-06 00:01:19.702754 | 
orchestrator | 00:01:19.701 STDOUT terraform:  } 2025-05-06 00:01:19.702773 | orchestrator | 00:01:19.701 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.702785 | orchestrator | 00:01:19.701 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-06 00:01:19.702796 | orchestrator | 00:01:19.701 STDOUT terraform:  } 2025-05-06 00:01:19.702813 | orchestrator | 00:01:19.701 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.702825 | orchestrator | 00:01:19.701 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-06 00:01:19.702836 | orchestrator | 00:01:19.701 STDOUT terraform:  } 2025-05-06 00:01:19.702847 | orchestrator | 00:01:19.701 STDOUT terraform:  + binding (known after apply) 2025-05-06 00:01:19.702858 | orchestrator | 00:01:19.701 STDOUT terraform:  + fixed_ip { 2025-05-06 00:01:19.702870 | orchestrator | 00:01:19.701 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-06 00:01:19.702882 | orchestrator | 00:01:19.701 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-06 00:01:19.702893 | orchestrator | 00:01:19.701 STDOUT terraform:  } 2025-05-06 00:01:19.702904 | orchestrator | 00:01:19.701 STDOUT terraform:  } 2025-05-06 00:01:19.702915 | orchestrator | 00:01:19.701 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-06 00:01:19.702926 | orchestrator | 00:01:19.701 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-06 00:01:19.702938 | orchestrator | 00:01:19.701 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-06 00:01:19.702949 | orchestrator | 00:01:19.701 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-06 00:01:19.702960 | orchestrator | 00:01:19.701 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-06 00:01:19.702971 | orchestrator | 00:01:19.701 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.702982 | orchestrator | 
00:01:19.701 STDOUT terraform:  + device_id = (known after apply) 2025-05-06 00:01:19.702993 | orchestrator | 00:01:19.701 STDOUT terraform:  + device_owner = (known after apply) 2025-05-06 00:01:19.703004 | orchestrator | 00:01:19.701 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-06 00:01:19.703020 | orchestrator | 00:01:19.701 STDOUT terraform:  + dns_name = (known after apply) 2025-05-06 00:01:19.703031 | orchestrator | 00:01:19.701 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.703043 | orchestrator | 00:01:19.701 STDOUT terraform:  + mac_address = (known after apply) 2025-05-06 00:01:19.703054 | orchestrator | 00:01:19.701 STDOUT terraform:  + network_id = (known after apply) 2025-05-06 00:01:19.703066 | orchestrator | 00:01:19.701 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-06 00:01:19.703077 | orchestrator | 00:01:19.701 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-06 00:01:19.703088 | orchestrator | 00:01:19.701 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.703099 | orchestrator | 00:01:19.701 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-06 00:01:19.703110 | orchestrator | 00:01:19.701 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-06 00:01:19.703121 | orchestrator | 00:01:19.702 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.703133 | orchestrator | 00:01:19.702 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-06 00:01:19.703148 | orchestrator | 00:01:19.702 STDOUT terraform:  } 2025-05-06 00:01:19.703171 | orchestrator | 00:01:19.702 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.703182 | orchestrator | 00:01:19.702 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-06 00:01:19.703199 | orchestrator | 00:01:19.702 STDOUT terraform:  } 2025-05-06 00:01:19.703211 | orchestrator | 00:01:19.702 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 
00:01:19.703222 | orchestrator | 00:01:19.702 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-06 00:01:19.703233 | orchestrator | 00:01:19.702 STDOUT terraform:  } 2025-05-06 00:01:19.703245 | orchestrator | 00:01:19.702 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.703256 | orchestrator | 00:01:19.702 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-06 00:01:19.703267 | orchestrator | 00:01:19.702 STDOUT terraform:  } 2025-05-06 00:01:19.703278 | orchestrator | 00:01:19.702 STDOUT terraform:  + binding (known after apply) 2025-05-06 00:01:19.703290 | orchestrator | 00:01:19.702 STDOUT terraform:  + fixed_ip { 2025-05-06 00:01:19.703300 | orchestrator | 00:01:19.702 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-06 00:01:19.703312 | orchestrator | 00:01:19.702 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-06 00:01:19.703323 | orchestrator | 00:01:19.702 STDOUT terraform:  } 2025-05-06 00:01:19.703334 | orchestrator | 00:01:19.702 STDOUT terraform:  } 2025-05-06 00:01:19.703345 | orchestrator | 00:01:19.702 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-06 00:01:19.703356 | orchestrator | 00:01:19.702 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-06 00:01:19.703368 | orchestrator | 00:01:19.702 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-06 00:01:19.703379 | orchestrator | 00:01:19.702 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-06 00:01:19.703390 | orchestrator | 00:01:19.702 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-06 00:01:19.703401 | orchestrator | 00:01:19.702 STDOUT terraform:  + all_tags = (known after apply) 2025-05-06 00:01:19.703412 | orchestrator | 00:01:19.702 STDOUT terraform:  + device_id = (known after apply) 2025-05-06 00:01:19.703438 | orchestrator | 00:01:19.702 STDOUT terraform:  + device_owner = (known after 
apply) 2025-05-06 00:01:19.703450 | orchestrator | 00:01:19.702 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-06 00:01:19.703461 | orchestrator | 00:01:19.702 STDOUT terraform:  + dns_name = (known after apply) 2025-05-06 00:01:19.703472 | orchestrator | 00:01:19.702 STDOUT terraform:  + id = (known after apply) 2025-05-06 00:01:19.703483 | orchestrator | 00:01:19.702 STDOUT terraform:  + mac_address = (known after apply) 2025-05-06 00:01:19.703494 | orchestrator | 00:01:19.702 STDOUT terraform:  + network_id = (known after apply) 2025-05-06 00:01:19.703505 | orchestrator | 00:01:19.702 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-06 00:01:19.703516 | orchestrator | 00:01:19.702 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-06 00:01:19.703533 | orchestrator | 00:01:19.702 STDOUT terraform:  + region = (known after apply) 2025-05-06 00:01:19.703544 | orchestrator | 00:01:19.702 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-06 00:01:19.703555 | orchestrator | 00:01:19.702 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-06 00:01:19.703566 | orchestrator | 00:01:19.702 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.703582 | orchestrator | 00:01:19.702 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-06 00:01:19.703594 | orchestrator | 00:01:19.702 STDOUT terraform:  } 2025-05-06 00:01:19.703605 | orchestrator | 00:01:19.702 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.703617 | orchestrator | 00:01:19.703 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-06 00:01:19.703629 | orchestrator | 00:01:19.703 STDOUT terraform:  } 2025-05-06 00:01:19.703640 | orchestrator | 00:01:19.703 STDOUT terraform:  + allowed_address_pairs { 2025-05-06 00:01:19.703652 | orchestrator | 00:01:19.703 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-06 00:01:19.703668 | orchestrator | 00:01:19.703 STDOUT terraform:  } 
2025-05-06 00:01:19.703713 | orchestrator | 00:01:19.703 STDOUT terraform:  + allowed_address_pairs {
terraform:  + ip_address = "192.168.16.9/20"
terraform:  }
terraform:  + binding (known after apply)
terraform:  + fixed_ip {
terraform:  + ip_address = "192.168.16.15"
terraform:  + subnet_id = (known after apply)
terraform:  }
terraform:  }
terraform:  # openstack_networking_router_interface_v2.router_interface will be created
terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" {
terraform:  + force_destroy = false
terraform:  + id = (known after apply)
terraform:  + port_id = (known after apply)
terraform:  + region = (known after apply)
terraform:  + router_id = (known after apply)
terraform:  + subnet_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_router_v2.router will be created
terraform:  + resource "openstack_networking_router_v2" "router" {
terraform:  + admin_state_up = (known after apply)
terraform:  + all_tags = (known after apply)
terraform:  + availability_zone_hints = [
terraform:  + "nova",
terraform:  ]
terraform:  + distributed = (known after apply)
terraform:  + enable_snat = (known after apply)
terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
terraform:  + id = (known after apply)
terraform:  + name = "testbed"
terraform:  + region = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  + external_fixed_ip (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
terraform:  + description = "ssh"
terraform:  + direction = "ingress"
terraform:  + ethertype = "IPv4"
terraform:  + id = (known after apply)
terraform:  + port_range_max = 22
terraform:  + port_range_min = 22
terraform:  + protocol = "tcp"
terraform:  + region = (known after apply)
terraform:  + remote_group_id = (known after apply)
terraform:  + remote_ip_prefix = "0.0.0.0/0"
terraform:  + security_group_id = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
terraform:  + description = "wireguard"
terraform:  + direction = "ingress"
terraform:  + ethertype = "IPv4"
terraform:  + id = (known after apply)
terraform:  + port_range_max = 51820
terraform:  + port_range_min = 51820
terraform:  + protocol = "udp"
terraform:  + region = (known after apply)
terraform:  + remote_group_id = (known after apply)
terraform:  + remote_ip_prefix = "0.0.0.0/0"
terraform:  + security_group_id = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
terraform:  + direction = "ingress"
terraform:  + ethertype = "IPv4"
terraform:  + id = (known after apply)
terraform:  + protocol = "tcp"
terraform:  + region = (known after apply)
terraform:  + remote_group_id = (known after apply)
terraform:  + remote_ip_prefix = "192.168.16.0/20"
terraform:  + security_group_id = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
terraform:  + direction = "ingress"
terraform:  + ethertype = "IPv4"
terraform:  + id = (known after apply)
terraform:  + protocol = "udp"
terraform:  + region = (known after apply)
terraform:  + remote_group_id = (known after apply)
terraform:  + remote_ip_prefix = "192.168.16.0/20"
terraform:  + security_group_id = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
terraform:  + direction = "ingress"
terraform:  + ethertype = "IPv4"
terraform:  + id = (known after apply)
terraform:  + protocol = "icmp"
terraform:  + region = (known after apply)
terraform:  + remote_group_id = (known after apply)
terraform:  + remote_ip_prefix = "0.0.0.0/0"
terraform:  + security_group_id = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
terraform:  + direction = "ingress"
terraform:  + ethertype = "IPv4"
terraform:  + id = (known after apply)
terraform:  + protocol = "tcp"
terraform:  + region = (known after apply)
terraform:  + remote_group_id = (known after apply)
terraform:  + remote_ip_prefix = "0.0.0.0/0"
terraform:  + security_group_id = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
terraform:  + direction = "ingress"
terraform:  + ethertype = "IPv4"
terraform:  + id = (known after apply)
terraform:  + protocol = "udp"
terraform:  + region = (known after apply)
terraform:  + remote_group_id = (known after apply)
terraform:  + remote_ip_prefix = "0.0.0.0/0"
terraform:  + security_group_id = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
terraform:  + direction = "ingress"
terraform:  + ethertype = "IPv4"
terraform:  + id = (known after apply)
terraform:  + protocol = "icmp"
terraform:  + region = (known after apply)
terraform:  + remote_group_id = (known after apply)
terraform:  + remote_ip_prefix = "0.0.0.0/0"
terraform:  + security_group_id = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
terraform:  + description = "vrrp"
terraform:  + direction = "ingress"
terraform:  + ethertype = "IPv4"
terraform:  + id = (known after apply)
terraform:  + protocol = "112"
terraform:  + region = (known after apply)
terraform:  + remote_group_id = (known after apply)
terraform:  + remote_ip_prefix = "0.0.0.0/0"
terraform:  + security_group_id = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
terraform:  + all_tags = (known after apply)
terraform:  + description = "management security group"
terraform:  + id = (known after apply)
terraform:  + name = "testbed-management"
terraform:  + region = (known after apply)
terraform:  + stateful = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
terraform:  + all_tags = (known after apply)
terraform:  + description = "node security group"
terraform:  + id = (known after apply)
terraform:  + name = "testbed-node"
terraform:  + region = (known after apply)
terraform:  + stateful = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  }
terraform:  # openstack_networking_subnet_v2.subnet_management will be created
terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
terraform:  + all_tags = (known after apply)
terraform:  + cidr = "192.168.16.0/20"
terraform:  + dns_nameservers = [
terraform:  + "8.8.8.8",
terraform:  + "9.9.9.9",
terraform:  ]
terraform:  + enable_dhcp = true
terraform:  + gateway_ip = (known after apply)
terraform:  + id = (known after apply)
terraform:  + ip_version = 4
terraform:  + ipv6_address_mode = (known after apply)
terraform:  + ipv6_ra_mode = (known after apply)
terraform:  + name = "subnet-testbed-management"
terraform:  + network_id = (known after apply)
terraform:  + no_gateway = false
terraform:  + region = (known after apply)
terraform:  + service_types = (known after apply)
terraform:  + tenant_id = (known after apply)
terraform:  + allocation_pool {
terraform:  + end = "192.168.31.250"
terraform:  + start = "192.168.31.200"
terraform:  }
terraform:  }
terraform:  # terraform_data.image will be created
terraform:  + resource "terraform_data" "image" {
terraform:  + id = (known after apply)
terraform:  + input = "Ubuntu 24.04"
terraform:  + output = (known after apply)
terraform:  }
terraform:  # terraform_data.image_node will be created
terraform:  + resource "terraform_data" "image_node" {
terraform:  + id = (known after apply)
terraform:  + input = "Ubuntu 24.04"
terraform:  + output = (known after apply)
terraform:  }
terraform: Plan: 82 to add, 0 to change, 0 to destroy.
terraform: Changes to Outputs:
terraform:  + manager_address = (sensitive value)
terraform:  + private_key = (sensitive value)
00:01:19.820 STDOUT terraform: terraform_data.image: Creating...
00:01:19.821 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=1974b2ee-9b82-08ff-7e89-3ff31975370a]
00:01:19.899 STDOUT terraform: terraform_data.image_node: Creating...
00:01:19.900 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=5b2c93da-297c-0e00-c764-1877b3e74405]
00:01:19.918 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
00:01:19.918 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
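The security-group rules listed in the plan above map onto `openstack_networking_secgroup_rule_v2` resources. A minimal HCL sketch of the first management rule, with argument values taken from the plan output; the way the security group is referenced is an assumption, not taken from the actual testbed Terraform sources:

```hcl
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

# Allow SSH from anywhere into the management security group.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

The remaining rules (wireguard on udp/51820, intra-subnet tcp/udp from 192.168.16.0/20, icmp, and vrrp as IP protocol 112) follow the same shape with the values shown in the plan.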
2025-05-06 00:01:19.922750 | orchestrator | 00:01:19.922 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
00:01:19.923 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
00:01:19.923 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating...
00:01:19.923 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
00:01:19.925 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
00:01:19.926 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
00:01:19.926 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating...
00:01:19.931 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating...
00:01:20.399 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
00:01:20.405 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
00:01:20.414 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
00:01:20.414 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating...
00:01:20.515 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
00:01:20.522 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
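The `openstack_compute_keypair_v2.key` resource created above (and the `local_sensitive_file.id_rsa` / `local_file.id_rsa_pub` files that appear later in the log) suggest a key pair generated inside Terraform and written to disk. A minimal sketch under that assumption; the `tls_private_key` approach and file names are inferred, not confirmed by the log:

```hcl
# Generate an RSA key in-state and register its public half as a Nova keypair.
resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "openstack_compute_keypair_v2" "key" {
  name       = "testbed"
  public_key = tls_private_key.ssh.public_key_openssh
}

resource "local_sensitive_file" "id_rsa" {
  filename = "id_rsa"  # path is an assumption
  content  = tls_private_key.ssh.private_key_pem
}
```

This would also explain why the plan marks `private_key` as a sensitive output.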
2025-05-06 00:01:25.747360 | orchestrator | 00:01:25.746 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=6815ff65-0dde-4914-a318-5b843f489ea4]
00:01:25.753 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating...
00:01:29.924 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed]
00:01:29.924 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
00:01:29.926 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
00:01:29.927 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
00:01:29.927 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed]
00:01:29.932 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed]
00:01:30.414 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
00:01:30.416 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed]
00:01:30.523 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
00:01:30.526 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=d2c5f30c-7574-4db2-b6fd-52c11ffcec81]
00:01:30.535 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=f2e4c6c8-e338-4410-96b4-d1d5dab5be16]
00:01:30.537 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 11s [id=cc7f276d-c2ba-4b91-9f6b-a505ec6ab98a]
00:01:30.539 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
00:01:30.543 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
00:01:30.547 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
00:01:30.551 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=dd3ac05d-c575-4080-995d-3bfc9d0012c6]
00:01:30.560 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating...
00:01:30.567 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=7066bed1-b6f5-4fc6-91d4-16dfe41e1882]
00:01:30.568 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=11dd9f49-985b-4711-8afc-7de7cde1776f]
00:01:30.573 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating...
00:01:30.576 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating...
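The indexed addresses `node_volume[0]` through `node_volume[17]` being created in parallel indicate a single `openstack_blockstorage_volume_v3` resource expanded with `count`. A minimal sketch; the count of 18 is inferred from the indices in the log, while the volume name pattern and size are assumptions:

```hcl
# One data volume per node; Terraform creates these concurrently,
# which is why the log interleaves "Creating..." and "Creation complete" lines.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 18
  name  = "testbed-node-volume-${count.index}"  # naming is an assumption
  size  = 20                                    # size (GB) is an assumption
}
```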
2025-05-06 00:01:30.612840 | orchestrator | 00:01:30.612 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=eefa0fb1-6e32-4be6-9371-3c36667f9eb4]
00:01:30.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
00:01:30.629 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=bc0c56a8-1377-4a36-857b-86c78b746055]
00:01:30.636 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating...
00:01:30.690 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=4ae31ae6-cfcf-47bb-94a3-29249ee0671c]
00:01:30.702 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
00:01:35.755 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed]
00:01:35.931 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=c3e2c64f-9688-4cad-bb81-b3a7d150bd8b]
00:01:35.941 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
00:01:40.541 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
00:01:40.544 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
00:01:40.548 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
00:01:40.561 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed]
00:01:40.574 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed]
00:01:40.576 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed]
00:01:40.620 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
00:01:40.637 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed]
00:01:40.703 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
00:01:40.723 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=8c0721df-98b6-45a8-8372-f184b99eacbe]
00:01:40.850 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=b13833ce-dbae-48be-b135-3251cb983a77]
00:01:40.853 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=a5a4c6fa-807d-44c7-a556-c4522912d679]
00:01:40.854 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=1e73239c-12d8-4b54-bea1-88c93f0679a4]
00:01:40.856 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=9f4cae81-5600-43ad-ae81-4d2d3f64aa06]
00:01:40.857 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=1c7d9a9a-015d-4c6e-aa25-f0276745bfc1]
00:01:40.857 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=db071690-0f8e-4535-a70c-dc0b8d604c8e]
00:01:40.858 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=7e976783-2213-433c-91fb-66c729e68827]
00:01:40.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
00:01:40.866 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
00:01:40.871 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
00:01:40.871 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
00:01:40.873 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
00:01:40.876 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
00:01:40.885 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=5a11af06631e8d14d269f0f8a86371a6843bbe63]
00:01:40.885 STDOUT terraform: local_file.id_rsa_pub: Creating...
00:01:40.885 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-05-06 00:01:40.891387 | orchestrator | 00:01:40.891 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=5ae599a0d97df9d1767bfb9cc060dd0409efa9ce]
2025-05-06 00:01:41.057125 | orchestrator | 00:01:41.056 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=527d5616-4d3e-4454-846d-b66391bf5247]
2025-05-06 00:01:45.943558 | orchestrator | 00:01:45.943 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-05-06 00:01:46.276521 | orchestrator | 00:01:46.276 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=b7536583-7396-4238-bfd9-176b53234dc0]
2025-05-06 00:01:47.279079 | orchestrator | 00:01:47.278 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=525fd68c-4955-42a6-8731-3aacb30c9df7]
2025-05-06 00:01:47.285221 | orchestrator | 00:01:47.285 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-05-06 00:01:50.867588 | orchestrator | 00:01:50.867 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-05-06 00:01:50.870691 | orchestrator | 00:01:50.870 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-05-06 00:01:50.870892 | orchestrator | 00:01:50.870 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-05-06 00:01:50.873132 | orchestrator | 00:01:50.872 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-05-06 00:01:50.879595 | orchestrator | 00:01:50.879 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-05-06 00:01:51.241884 | orchestrator | 00:01:51.241 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8]
2025-05-06 00:01:51.246546 | orchestrator | 00:01:51.246 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=e738e251-d306-48ee-8a06-82586811a686]
2025-05-06 00:01:51.261470 | orchestrator | 00:01:51.261 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=971680de-ee79-4aff-976e-b13f7aba5834]
2025-05-06 00:01:51.263147 | orchestrator | 00:01:51.262 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042]
2025-05-06 00:01:51.272950 | orchestrator | 00:01:51.272 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=24b526a0-8758-48c4-be56-753d9cb3cc4b]
2025-05-06 00:01:54.033285 | orchestrator | 00:01:54.032 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=b6ac76cd-38d0-4924-babe-a4fe8284a2fe]
2025-05-06 00:01:54.037896 | orchestrator | 00:01:54.037 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-05-06 00:01:54.038929 | orchestrator | 00:01:54.038 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-05-06 00:01:54.044263 | orchestrator | 00:01:54.044 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-05-06 00:01:54.161081 | orchestrator | 00:01:54.160 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=ded80acd-b73e-48a4-a522-2efc4b26888f]
2025-05-06 00:01:54.169309 | orchestrator | 00:01:54.169 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=808d43ce-127f-4500-bdf5-511db0dec88e]
2025-05-06 00:01:54.182232 | orchestrator | 00:01:54.181 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-05-06 00:01:54.182573 | orchestrator | 00:01:54.182 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-05-06 00:01:54.183385 | orchestrator | 00:01:54.183 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-05-06 00:01:54.187172 | orchestrator | 00:01:54.183 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-05-06 00:01:54.187259 | orchestrator | 00:01:54.186 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-05-06 00:01:54.187732 | orchestrator | 00:01:54.187 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-05-06 00:01:54.188904 | orchestrator | 00:01:54.187 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-05-06 00:01:54.188966 | orchestrator | 00:01:54.188 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-05-06 00:01:54.189510 | orchestrator | 00:01:54.189 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-05-06 00:01:54.298935 | orchestrator | 00:01:54.298 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=f512b278-c2b6-4b57-9022-550cf50a4493]
2025-05-06 00:01:54.306820 | orchestrator | 00:01:54.306 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-05-06 00:01:54.426319 | orchestrator | 00:01:54.425 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=55e257cb-be6d-4cf2-8202-340917f38f2e]
2025-05-06 00:01:54.441666 | orchestrator | 00:01:54.441 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-05-06 00:01:54.563094 | orchestrator | 00:01:54.562 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=b0e6fa1a-cb52-42df-a70d-cf202c945372]
2025-05-06 00:01:54.582501 | orchestrator | 00:01:54.582 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-05-06 00:01:55.348977 | orchestrator | 00:01:55.348 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=ecde53b5-b505-41e9-a743-65dbbbc030a2]
2025-05-06 00:01:55.355559 | orchestrator | 00:01:55.355 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-05-06 00:01:55.474836 | orchestrator | 00:01:55.474 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=854ad677-1793-48ee-b66b-947095396eb6]
2025-05-06 00:01:55.488944 | orchestrator | 00:01:55.488 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-05-06 00:01:55.513963 | orchestrator | 00:01:55.513 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b66f505d-51fa-4dc1-8c92-2df47fc4cfa7]
2025-05-06 00:01:55.528268 | orchestrator | 00:01:55.528 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-05-06 00:01:55.605553 | orchestrator | 00:01:55.605 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=2171c2f1-653d-4bc7-8a10-f0a3397d5b1c]
2025-05-06 00:01:55.626884 | orchestrator | 00:01:55.626 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-05-06 00:01:55.721791 | orchestrator | 00:01:55.721 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=3e737937-89a8-47ee-83a7-c0318c665656]
2025-05-06 00:01:55.940130 | orchestrator | 00:01:55.939 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=c01695e4-4970-4a15-b9cf-fe2a3b8c2926]
2025-05-06 00:01:59.814076 | orchestrator | 00:01:59.813 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=4f635ea9-f231-4607-a090-335c3eebcbff]
2025-05-06 00:02:00.084306 | orchestrator | 00:02:00.083 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=8d83ea38-33a9-47e2-ae27-0e5d3aeae834]
2025-05-06 00:02:00.193926 | orchestrator | 00:02:00.193 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=2ff8065f-94c4-4615-b287-f98cc34031da]
2025-05-06 00:02:00.205664 | orchestrator | 00:02:00.205 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=8449a8d9-78d8-4623-95ea-31315fad994e]
2025-05-06 00:02:01.057579 | orchestrator | 00:02:01.057 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=e902e7c8-820f-4e06-bb39-81c3f3fbab6b]
2025-05-06 00:02:01.414369 | orchestrator | 00:02:01.413 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=88c70c13-dc56-45f9-8a30-8a9e3287c880]
2025-05-06 00:02:01.448466 | orchestrator | 00:02:01.448 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=ee17a604-c8a6-460a-a76e-053862db9a23]
2025-05-06 00:02:01.769823 | orchestrator | 00:02:01.769 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=435b5ca4-006a-4b56-bb8d-6871bb4e80b3]
2025-05-06 00:02:01.793074 | orchestrator | 00:02:01.791 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-05-06 00:02:01.806367 | orchestrator | 00:02:01.806 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-05-06 00:02:01.806557 | orchestrator | 00:02:01.806 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-05-06 00:02:01.811396 | orchestrator | 00:02:01.811 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-05-06 00:02:01.813954 | orchestrator | 00:02:01.813 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-05-06 00:02:01.815954 | orchestrator | 00:02:01.815 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-05-06 00:02:01.819117 | orchestrator | 00:02:01.819 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-05-06 00:02:09.084900 | orchestrator | 00:02:09.084 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=f67b049a-1b32-4a1f-9009-a3851a19c3f6]
2025-05-06 00:02:09.098615 | orchestrator | 00:02:09.098 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-05-06 00:02:09.102211 | orchestrator | 00:02:09.101 STDOUT terraform: local_file.inventory: Creating...
2025-05-06 00:02:09.106400 | orchestrator | 00:02:09.101 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-05-06 00:02:09.106488 | orchestrator | 00:02:09.106 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=d66e0eaf3c57f044fce41112e6ed1f49b05fbf87]
2025-05-06 00:02:09.107295 | orchestrator | 00:02:09.107 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=b080f76bd256cb0f34d38ca6f47fd434a10445cc]
2025-05-06 00:02:09.658964 | orchestrator | 00:02:09.658 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=f67b049a-1b32-4a1f-9009-a3851a19c3f6]
2025-05-06 00:02:11.807917 | orchestrator | 00:02:11.807 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-05-06 00:02:11.808045 | orchestrator | 00:02:11.807 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-05-06 00:02:11.815814 | orchestrator | 00:02:11.815 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-05-06 00:02:11.815930 | orchestrator | 00:02:11.815 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-05-06 00:02:11.819279 | orchestrator | 00:02:11.819 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-05-06 00:02:11.820242 | orchestrator | 00:02:11.820 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-05-06 00:02:21.808321 | orchestrator | 00:02:21.807 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-05-06 00:02:21.808513 | orchestrator | 00:02:21.808 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-05-06 00:02:21.816770 | orchestrator | 00:02:21.816 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-05-06 00:02:21.816944 | orchestrator | 00:02:21.816 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-05-06 00:02:21.819988 | orchestrator | 00:02:21.819 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-05-06 00:02:21.821231 | orchestrator | 00:02:21.821 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-05-06 00:02:22.282733 | orchestrator | 00:02:22.282 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=f0c5b1cf-510f-496a-9f37-01eee102889a]
2025-05-06 00:02:22.301058 | orchestrator | 00:02:22.300 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=0d63c57e-2747-4c32-85ef-6e771a951de7]
2025-05-06 00:02:22.320253 | orchestrator | 00:02:22.319 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=921ce6eb-4a0c-4718-90d6-9992a01e3dd1]
2025-05-06 00:02:22.333748 | orchestrator | 00:02:22.333 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=9630d2f0-1b9c-4df8-a62e-d0d9514f7b23]
2025-05-06 00:02:22.400547 | orchestrator | 00:02:22.400 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=8db60e1a-352a-496a-b91e-56254c0e0268]
2025-05-06 00:02:22.968605 | orchestrator | 00:02:22.968 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=c7515e22-63e8-4f77-b933-90ed57e1fe8c]
2025-05-06 00:02:22.986910 | orchestrator | 00:02:22.986 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-05-06 00:02:22.999827 | orchestrator | 00:02:22.999 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5492706867466012123]
2025-05-06 00:02:23.002247 | orchestrator | 00:02:23.002 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating...
2025-05-06 00:02:23.002296 | orchestrator | 00:02:23.002 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating...
2025-05-06 00:02:23.002324 | orchestrator | 00:02:23.002 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating...
2025-05-06 00:02:23.007538 | orchestrator | 00:02:23.007 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-05-06 00:02:23.009640 | orchestrator | 00:02:23.009 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-05-06 00:02:23.018109 | orchestrator | 00:02:23.017 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating...
2025-05-06 00:02:23.019698 | orchestrator | 00:02:23.019 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-05-06 00:02:23.021552 | orchestrator | 00:02:23.021 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-05-06 00:02:23.025970 | orchestrator | 00:02:23.025 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-05-06 00:02:23.030967 | orchestrator | 00:02:23.030 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating...
2025-05-06 00:02:28.334385 | orchestrator | 00:02:28.333 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=8db60e1a-352a-496a-b91e-56254c0e0268/7e976783-2213-433c-91fb-66c729e68827]
2025-05-06 00:02:28.350852 | orchestrator | 00:02:28.350 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-05-06 00:02:28.362432 | orchestrator | 00:02:28.361 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=c7515e22-63e8-4f77-b933-90ed57e1fe8c/dd3ac05d-c575-4080-995d-3bfc9d0012c6]
2025-05-06 00:02:28.364592 | orchestrator | 00:02:28.364 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 5s [id=921ce6eb-4a0c-4718-90d6-9992a01e3dd1/eefa0fb1-6e32-4be6-9371-3c36667f9eb4]
2025-05-06 00:02:28.371829 | orchestrator | 00:02:28.371 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-05-06 00:02:28.373001 | orchestrator | 00:02:28.372 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating...
2025-05-06 00:02:28.380456 | orchestrator | 00:02:28.380 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=0d63c57e-2747-4c32-85ef-6e771a951de7/11dd9f49-985b-4711-8afc-7de7cde1776f]
2025-05-06 00:02:28.384690 | orchestrator | 00:02:28.384 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=f0c5b1cf-510f-496a-9f37-01eee102889a/db071690-0f8e-4535-a70c-dc0b8d604c8e]
2025-05-06 00:02:28.385877 | orchestrator | 00:02:28.385 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=0d63c57e-2747-4c32-85ef-6e771a951de7/4ae31ae6-cfcf-47bb-94a3-29249ee0671c]
2025-05-06 00:02:28.394654 | orchestrator | 00:02:28.394 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating...
2025-05-06 00:02:28.395014 | orchestrator | 00:02:28.394 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating...
2025-05-06 00:02:28.398709 | orchestrator | 00:02:28.398 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-05-06 00:02:28.400246 | orchestrator | 00:02:28.400 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=c7515e22-63e8-4f77-b933-90ed57e1fe8c/d2c5f30c-7574-4db2-b6fd-52c11ffcec81]
2025-05-06 00:02:28.412466 | orchestrator | 00:02:28.412 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-05-06 00:02:28.419849 | orchestrator | 00:02:28.419 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=921ce6eb-4a0c-4718-90d6-9992a01e3dd1/bc0c56a8-1377-4a36-857b-86c78b746055]
2025-05-06 00:02:28.435876 | orchestrator | 00:02:28.435 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating...
2025-05-06 00:02:28.467186 | orchestrator | 00:02:28.466 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=0d63c57e-2747-4c32-85ef-6e771a951de7/1c7d9a9a-015d-4c6e-aa25-f0276745bfc1]
2025-05-06 00:02:28.481922 | orchestrator | 00:02:28.481 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=c7515e22-63e8-4f77-b933-90ed57e1fe8c/b13833ce-dbae-48be-b135-3251cb983a77]
2025-05-06 00:02:28.483640 | orchestrator | 00:02:28.483 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-05-06 00:02:33.701029 | orchestrator | 00:02:33.700 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 6s [id=8db60e1a-352a-496a-b91e-56254c0e0268/cc7f276d-c2ba-4b91-9f6b-a505ec6ab98a] 2025-05-06 00:02:33.724846 | orchestrator | 00:02:33.724 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=9630d2f0-1b9c-4df8-a62e-d0d9514f7b23/f2e4c6c8-e338-4410-96b4-d1d5dab5be16] 2025-05-06 00:02:33.735558 | orchestrator | 00:02:33.735 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=f0c5b1cf-510f-496a-9f37-01eee102889a/1e73239c-12d8-4b54-bea1-88c93f0679a4] 2025-05-06 00:02:33.752592 | orchestrator | 00:02:33.752 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=8db60e1a-352a-496a-b91e-56254c0e0268/8c0721df-98b6-45a8-8372-f184b99eacbe] 2025-05-06 00:02:33.761265 | orchestrator | 00:02:33.760 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=f0c5b1cf-510f-496a-9f37-01eee102889a/7066bed1-b6f5-4fc6-91d4-16dfe41e1882] 2025-05-06 00:02:33.788195 | orchestrator | 00:02:33.787 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 6s [id=9630d2f0-1b9c-4df8-a62e-d0d9514f7b23/a5a4c6fa-807d-44c7-a556-c4522912d679] 2025-05-06 00:02:33.790392 | orchestrator | 00:02:33.790 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 6s [id=921ce6eb-4a0c-4718-90d6-9992a01e3dd1/c3e2c64f-9688-4cad-bb81-b3a7d150bd8b] 2025-05-06 00:02:33.808590 | orchestrator | 00:02:33.808 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 6s [id=9630d2f0-1b9c-4df8-a62e-d0d9514f7b23/9f4cae81-5600-43ad-ae81-4d2d3f64aa06] 2025-05-06 00:02:38.484779 | orchestrator | 
00:02:38.484 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-06 00:02:48.485864 | orchestrator | 00:02:48.485 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-06 00:02:49.065765 | orchestrator | 00:02:49.065 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=8552b0c0-d724-40e0-84e5-8d6c660811ca] 2025-05-06 00:02:49.094159 | orchestrator | 00:02:49.093 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 2025-05-06 00:02:49.094277 | orchestrator | 00:02:49.094 STDOUT terraform: Outputs: 2025-05-06 00:02:49.094299 | orchestrator | 00:02:49.094 STDOUT terraform: manager_address = 2025-05-06 00:02:49.094319 | orchestrator | 00:02:49.094 STDOUT terraform: private_key = 2025-05-06 00:02:59.697082 | orchestrator | changed 2025-05-06 00:02:59.739510 | 2025-05-06 00:02:59.739694 | TASK [Fetch manager address] 2025-05-06 00:03:00.200690 | orchestrator | ok 2025-05-06 00:03:00.211990 | 2025-05-06 00:03:00.212118 | TASK [Set manager_host address] 2025-05-06 00:03:00.327342 | orchestrator | ok 2025-05-06 00:03:00.338551 | 2025-05-06 00:03:00.338688 | LOOP [Update ansible collections] 2025-05-06 00:03:01.234302 | orchestrator | changed 2025-05-06 00:03:02.139997 | orchestrator | changed 2025-05-06 00:03:02.167482 | 2025-05-06 00:03:02.167740 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-06 00:03:12.763548 | orchestrator | ok 2025-05-06 00:03:12.776359 | 2025-05-06 00:03:12.776490 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-06 00:04:12.826503 | orchestrator | ok 2025-05-06 00:04:12.839007 | 2025-05-06 00:04:12.839142 | TASK [Fetch manager ssh hostkey] 2025-05-06 00:04:13.924303 | orchestrator | Output suppressed because no_log was given 2025-05-06 00:04:13.935602 | 2025-05-06 00:04:13.935723 | TASK [Get ssh keypair from 
terraform environment] 2025-05-06 00:04:14.521974 | orchestrator | changed 2025-05-06 00:04:14.540501 | 2025-05-06 00:04:14.540640 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-06 00:04:14.590256 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-06 00:04:14.600486 | 2025-05-06 00:04:14.600611 | TASK [Run manager part 0] 2025-05-06 00:04:15.475331 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-06 00:04:15.517774 | orchestrator | 2025-05-06 00:04:17.317050 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-06 00:04:17.317120 | orchestrator | 2025-05-06 00:04:17.317145 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-06 00:04:17.317163 | orchestrator | ok: [testbed-manager] 2025-05-06 00:04:19.212156 | orchestrator | 2025-05-06 00:04:19.212234 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-06 00:04:19.212249 | orchestrator | 2025-05-06 00:04:19.212256 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-06 00:04:19.212270 | orchestrator | ok: [testbed-manager] 2025-05-06 00:04:19.875425 | orchestrator | 2025-05-06 00:04:19.875490 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-06 00:04:19.875506 | orchestrator | ok: [testbed-manager] 2025-05-06 00:04:19.930600 | orchestrator | 2025-05-06 00:04:19.930646 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-06 00:04:19.930659 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:04:19.963879 | orchestrator | 2025-05-06 00:04:19.963983 | 
orchestrator | TASK [Update package cache] **************************************************** 2025-05-06 00:04:19.964015 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:04:19.989940 | orchestrator | 2025-05-06 00:04:19.989977 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-06 00:04:19.989989 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:04:20.015193 | orchestrator | 2025-05-06 00:04:20.015252 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-06 00:04:20.015266 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:04:20.041379 | orchestrator | 2025-05-06 00:04:20.041414 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-06 00:04:20.041434 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:04:20.072606 | orchestrator | 2025-05-06 00:04:20.072647 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-06 00:04:20.072659 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:04:20.116082 | orchestrator | 2025-05-06 00:04:20.116124 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-06 00:04:20.116138 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:04:20.938377 | orchestrator | 2025-05-06 00:04:20.938459 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-06 00:04:20.938476 | orchestrator | changed: [testbed-manager] 2025-05-06 00:08:34.752545 | orchestrator | 2025-05-06 00:08:34.754162 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-06 00:08:34.754237 | orchestrator | changed: [testbed-manager] 2025-05-06 00:09:53.919333 | orchestrator | 2025-05-06 00:09:53.919446 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-05-06 00:09:53.919481 | orchestrator | changed: [testbed-manager] 2025-05-06 00:10:23.527897 | orchestrator | 2025-05-06 00:10:23.528020 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-06 00:10:23.528056 | orchestrator | changed: [testbed-manager] 2025-05-06 00:10:32.500453 | orchestrator | 2025-05-06 00:10:32.500570 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-06 00:10:32.500606 | orchestrator | changed: [testbed-manager] 2025-05-06 00:10:32.549985 | orchestrator | 2025-05-06 00:10:32.550116 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-06 00:10:32.550163 | orchestrator | ok: [testbed-manager] 2025-05-06 00:10:33.335066 | orchestrator | 2025-05-06 00:10:33.335172 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-06 00:10:33.335237 | orchestrator | ok: [testbed-manager] 2025-05-06 00:10:34.068920 | orchestrator | 2025-05-06 00:10:34.069032 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-06 00:10:34.069078 | orchestrator | changed: [testbed-manager] 2025-05-06 00:10:40.457072 | orchestrator | 2025-05-06 00:10:40.457178 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-06 00:10:40.457244 | orchestrator | changed: [testbed-manager] 2025-05-06 00:10:46.112693 | orchestrator | 2025-05-06 00:10:46.112803 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-06 00:10:46.112891 | orchestrator | changed: [testbed-manager] 2025-05-06 00:10:48.575030 | orchestrator | 2025-05-06 00:10:48.575073 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-06 00:10:48.575091 | orchestrator | changed: 
[testbed-manager] 2025-05-06 00:10:50.132683 | orchestrator | 2025-05-06 00:10:50.132727 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-06 00:10:50.132744 | orchestrator | changed: [testbed-manager] 2025-05-06 00:10:51.230634 | orchestrator | 2025-05-06 00:10:51.230761 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-06 00:10:51.230855 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-06 00:10:51.274307 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-06 00:10:51.274404 | orchestrator | 2025-05-06 00:10:51.274425 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-06 00:10:51.274456 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-06 00:10:54.467750 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-06 00:10:54.467863 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-06 00:10:54.467882 | orchestrator | deprecation_warnings=False in ansible.cfg. 
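The tasks above create a venv directory and install version-pinned packages into it (`netaddr`, `ansible-core`, `requests>=2.32.2`, `docker>=7.1.0`). The pattern — driving a venv through explicit paths instead of sourcing its activate script — can be sketched as follows; the `/tmp` path is hypothetical (the job uses `/opt/venv`), and `--without-pip` keeps the sketch offline while the real tasks of course need pip:

```shell
#!/bin/sh
set -e
VENV=/tmp/demo-venv            # hypothetical; the job uses /opt/venv
rm -rf "$VENV"
python3 -m venv --without-pip "$VENV"
# Call the venv's interpreter directly; no `source bin/activate` needed.
"$VENV/bin/python" -c 'import sys; print(sys.prefix)'
# With pip present, the pinned installs would look like:
#   "$VENV/bin/pip" install 'requests>=2.32.2' 'docker>=7.1.0'
```

The explicit-path form is what makes the later `venv_command` fact useful: every task can address the same interpreter without shell state.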
2025-05-06 00:10:54.467914 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-06 00:10:55.034509 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-06 00:10:55.034564 | orchestrator | 2025-05-06 00:10:55.034575 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-06 00:10:55.034591 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:16.590876 | orchestrator | 2025-05-06 00:11:16.590996 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-06 00:11:16.591034 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-06 00:11:18.869066 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-06 00:11:18.869168 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-06 00:11:18.869187 | orchestrator | 2025-05-06 00:11:18.869205 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-06 00:11:18.869268 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-06 00:11:20.319627 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-06 00:11:20.319731 | orchestrator | 2025-05-06 00:11:20.319751 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-06 00:11:20.319767 | orchestrator | 2025-05-06 00:11:20.319782 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-06 00:11:20.319813 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:20.368223 | orchestrator | 2025-05-06 00:11:20.368371 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-06 00:11:20.368402 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:20.428433 | 
orchestrator | 2025-05-06 00:11:20.428516 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-06 00:11:20.428547 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:21.175951 | orchestrator | 2025-05-06 00:11:21.176051 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-06 00:11:21.176088 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:21.888467 | orchestrator | 2025-05-06 00:11:21.889368 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-06 00:11:21.889397 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:23.261654 | orchestrator | 2025-05-06 00:11:23.261747 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-06 00:11:23.261781 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-06 00:11:24.908625 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-06 00:11:24.908705 | orchestrator | 2025-05-06 00:11:24.908724 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-06 00:11:24.908753 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:26.717938 | orchestrator | 2025-05-06 00:11:26.717988 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-06 00:11:26.718006 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-06 00:11:27.344118 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-06 00:11:27.344221 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-06 00:11:27.344274 | orchestrator | 2025-05-06 00:11:27.344292 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-06 00:11:27.344323 | orchestrator | changed: [testbed-manager] 
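The operator role above appends three locale exports to the user's `.bashrc`. The effect of such a lineinfile-style task can be sketched in plain shell as a grep-guarded append (the file path is a stand-in for the operator's `.bashrc`), which stays idempotent across reruns:

```shell
#!/bin/sh
set -e
RC=/tmp/demo_bashrc            # stands in for the operator user's ~/.bashrc
: > "$RC"
append_once() {
    # Append the exact line only if it is not already present.
    grep -qxF "$1" "$RC" || echo "$1" >> "$RC"
}
for run in 1 2; do             # the second pass must not duplicate anything
    append_once 'export LANGUAGE=C.UTF-8'
    append_once 'export LANG=C.UTF-8'
    append_once 'export LC_ALL=C.UTF-8'
done
wc -l < "$RC"                  # prints the line count (3)
```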
2025-05-06 00:11:27.411963 | orchestrator | 2025-05-06 00:11:27.412070 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-06 00:11:27.412115 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:11:28.286622 | orchestrator | 2025-05-06 00:11:28.286731 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-06 00:11:28.286767 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-06 00:11:28.325800 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:28.325886 | orchestrator | 2025-05-06 00:11:28.325902 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-06 00:11:28.325928 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:11:28.361883 | orchestrator | 2025-05-06 00:11:28.361968 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-06 00:11:28.361999 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:11:28.401613 | orchestrator | 2025-05-06 00:11:28.401689 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-06 00:11:28.401718 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:11:28.455859 | orchestrator | 2025-05-06 00:11:28.455950 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-06 00:11:28.455987 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:11:29.191596 | orchestrator | 2025-05-06 00:11:29.191699 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-06 00:11:29.191736 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:30.586524 | orchestrator | 2025-05-06 00:11:30.586694 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-06 00:11:30.586715 | orchestrator | 2025-05-06 
00:11:30.586731 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-06 00:11:30.586760 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:31.571520 | orchestrator | 2025-05-06 00:11:31.571625 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-06 00:11:31.571660 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:31.671131 | orchestrator | 2025-05-06 00:11:31.671231 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:11:31.671278 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-06 00:11:31.671295 | orchestrator | 2025-05-06 00:11:31.994762 | orchestrator | changed 2025-05-06 00:11:32.015276 | 2025-05-06 00:11:32.015409 | TASK [Point out that logging in to the manager is now possible] 2025-05-06 00:11:32.062161 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-05-06 00:11:32.071722 | 2025-05-06 00:11:32.071829 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-06 00:11:32.117183 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 
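The note above explains that 'Run manager part 1 + 2' produces no console output while it runs. One common way to structure such a long, silent step — capture everything to a log file and only surface the tail on failure — can be sketched as follows; the `run_step` body and log path are placeholders, not the actual playbook invocation:

```shell
#!/bin/sh
set -e
LOG=/tmp/manager-step.log      # hypothetical log path
run_step() {
    # Placeholder for something like: ssh testbed-manager 'ansible-playbook ...'
    echo "step output line 1"
    echo "step output line 2"
}
if run_step > "$LOG" 2>&1; then
    echo "step finished; full output in $LOG"
else
    # On failure, show only the end of the captured output.
    echo "step failed; last lines were:" >&2
    tail -n 20 "$LOG" >&2
    exit 1
fi
```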
2025-05-06 00:11:32.127784 | 2025-05-06 00:11:32.127901 | TASK [Run manager part 1 + 2] 2025-05-06 00:11:32.980549 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-06 00:11:33.035312 | orchestrator | 2025-05-06 00:11:35.609024 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-06 00:11:35.609091 | orchestrator | 2025-05-06 00:11:35.609115 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-06 00:11:35.609135 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:35.646842 | orchestrator | 2025-05-06 00:11:35.646926 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-06 00:11:35.646956 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:11:35.688712 | orchestrator | 2025-05-06 00:11:35.688776 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-06 00:11:35.688795 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:35.735026 | orchestrator | 2025-05-06 00:11:35.735092 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-06 00:11:35.735113 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:35.805223 | orchestrator | 2025-05-06 00:11:35.805311 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-06 00:11:35.805328 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:35.865599 | orchestrator | 2025-05-06 00:11:35.865669 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-06 00:11:35.865690 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:35.911793 | orchestrator | 2025-05-06 00:11:35.911847 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-06 00:11:35.911862 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-06 00:11:36.640516 | orchestrator | 2025-05-06 00:11:36.640589 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-06 00:11:36.640611 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:36.688358 | orchestrator | 2025-05-06 00:11:36.688422 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-06 00:11:36.688443 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:11:38.079373 | orchestrator | 2025-05-06 00:11:38.079450 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-06 00:11:38.079479 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:38.654321 | orchestrator | 2025-05-06 00:11:38.654391 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-06 00:11:38.654412 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:39.830970 | orchestrator | 2025-05-06 00:11:39.831053 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-06 00:11:39.831085 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:51.830714 | orchestrator | 2025-05-06 00:11:51.830816 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-06 00:11:51.830849 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:52.457923 | orchestrator | 2025-05-06 00:11:52.458007 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-06 00:11:52.458083 | orchestrator | ok: [testbed-manager] 2025-05-06 00:11:52.510121 | orchestrator | 2025-05-06 00:11:52.510182 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-05-06 00:11:52.510201 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:11:53.403731 | orchestrator | 2025-05-06 00:11:53.403790 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-06 00:11:53.403810 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:54.338289 | orchestrator | 2025-05-06 00:11:54.338352 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-06 00:11:54.338373 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:54.906263 | orchestrator | 2025-05-06 00:11:54.906398 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-06 00:11:54.906432 | orchestrator | changed: [testbed-manager] 2025-05-06 00:11:54.948264 | orchestrator | 2025-05-06 00:11:54.948388 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-06 00:11:54.948416 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-06 00:11:57.250959 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-06 00:11:57.251075 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-06 00:11:57.251096 | orchestrator | deprecation_warnings=False in ansible.cfg. 
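The deprecation warning repeated above states its own remedy: it can be silenced via `ansible.cfg`. The fragment would look like this (whether to silence it is a judgment call — the warning points at a real removal in ansible-core 2.19, so fixing the caller is the better long-term answer):

```ini
# ansible.cfg
[defaults]
deprecation_warnings = False
```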
2025-05-06 00:11:57.251127 | orchestrator | changed: [testbed-manager] 2025-05-06 00:12:06.093761 | orchestrator | 2025-05-06 00:12:06.093928 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-06 00:12:06.093947 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-06 00:12:07.129590 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-06 00:12:07.129700 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-06 00:12:07.129719 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-06 00:12:07.129736 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-06 00:12:07.129751 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-06 00:12:07.129766 | orchestrator | 2025-05-06 00:12:07.129781 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-06 00:12:07.129825 | orchestrator | changed: [testbed-manager] 2025-05-06 00:12:07.170548 | orchestrator | 2025-05-06 00:12:07.170628 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-06 00:12:07.170658 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:12:10.181194 | orchestrator | 2025-05-06 00:12:10.181953 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-06 00:12:10.181996 | orchestrator | changed: [testbed-manager] 2025-05-06 00:12:10.224076 | orchestrator | 2025-05-06 00:12:10.224180 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-06 00:12:10.224214 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:13:41.974651 | orchestrator | 2025-05-06 00:13:41.974715 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-06 00:13:41.974731 | orchestrator | changed: [testbed-manager] 2025-05-06 
00:13:43.073536 | orchestrator | 2025-05-06 00:13:43.073638 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-06 00:13:43.073672 | orchestrator | ok: [testbed-manager] 2025-05-06 00:13:43.169726 | orchestrator | 2025-05-06 00:13:43.169827 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:13:43.169849 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-06 00:13:43.169864 | orchestrator | 2025-05-06 00:13:43.265795 | orchestrator | changed 2025-05-06 00:13:43.287029 | 2025-05-06 00:13:43.287183 | TASK [Reboot manager] 2025-05-06 00:13:44.874047 | orchestrator | changed 2025-05-06 00:13:44.893235 | 2025-05-06 00:13:44.893427 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-06 00:13:58.491558 | orchestrator | ok 2025-05-06 00:13:58.504844 | 2025-05-06 00:13:58.504994 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-06 00:14:58.555735 | orchestrator | ok 2025-05-06 00:14:58.567014 | 2025-05-06 00:14:58.567140 | TASK [Deploy manager + bootstrap nodes] 2025-05-06 00:15:00.799987 | orchestrator | 2025-05-06 00:15:00.803201 | orchestrator | # DEPLOY MANAGER 2025-05-06 00:15:00.803260 | orchestrator | 2025-05-06 00:15:00.803279 | orchestrator | + set -e 2025-05-06 00:15:00.803324 | orchestrator | + echo 2025-05-06 00:15:00.803343 | orchestrator | + echo '# DEPLOY MANAGER' 2025-05-06 00:15:00.803360 | orchestrator | + echo 2025-05-06 00:15:00.803386 | orchestrator | + cat /opt/manager-vars.sh 2025-05-06 00:15:00.803422 | orchestrator | export NUMBER_OF_NODES=6 2025-05-06 00:15:00.803628 | orchestrator | 2025-05-06 00:15:00.803649 | orchestrator | export CEPH_VERSION=reef 2025-05-06 00:15:00.803664 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-06 00:15:00.803678 | orchestrator | export MANAGER_VERSION=8.1.0 
2025-05-06 00:15:00.803693 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-06 00:15:00.803707 | orchestrator | 2025-05-06 00:15:00.803722 | orchestrator | export ARA=false 2025-05-06 00:15:00.803737 | orchestrator | export TEMPEST=false 2025-05-06 00:15:00.803751 | orchestrator | export IS_ZUUL=true 2025-05-06 00:15:00.803765 | orchestrator | 2025-05-06 00:15:00.803780 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.79 2025-05-06 00:15:00.803795 | orchestrator | export EXTERNAL_API=false 2025-05-06 00:15:00.803809 | orchestrator | 2025-05-06 00:15:00.803823 | orchestrator | export IMAGE_USER=ubuntu 2025-05-06 00:15:00.803837 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-06 00:15:00.803852 | orchestrator | 2025-05-06 00:15:00.803866 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-06 00:15:00.803885 | orchestrator | 2025-05-06 00:15:00.804710 | orchestrator | + echo 2025-05-06 00:15:00.804744 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-06 00:15:00.804765 | orchestrator | ++ export INTERACTIVE=false 2025-05-06 00:15:00.804936 | orchestrator | ++ INTERACTIVE=false 2025-05-06 00:15:00.804955 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-06 00:15:00.804978 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-06 00:15:00.804997 | orchestrator | + source /opt/manager-vars.sh 2025-05-06 00:15:00.805108 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-06 00:15:00.805136 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-06 00:15:00.805165 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-06 00:15:00.805189 | orchestrator | ++ CEPH_VERSION=reef 2025-05-06 00:15:00.805206 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-06 00:15:00.805220 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-06 00:15:00.805242 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-06 00:15:00.805256 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-06 00:15:00.805270 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2025-05-06 00:15:00.805284 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-06 00:15:00.805298 | orchestrator | ++ export ARA=false 2025-05-06 00:15:00.805311 | orchestrator | ++ ARA=false 2025-05-06 00:15:00.805326 | orchestrator | ++ export TEMPEST=false 2025-05-06 00:15:00.805340 | orchestrator | ++ TEMPEST=false 2025-05-06 00:15:00.805359 | orchestrator | ++ export IS_ZUUL=true 2025-05-06 00:15:00.859274 | orchestrator | ++ IS_ZUUL=true 2025-05-06 00:15:00.859389 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.79 2025-05-06 00:15:00.859407 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.79 2025-05-06 00:15:00.859512 | orchestrator | ++ export EXTERNAL_API=false 2025-05-06 00:15:00.859536 | orchestrator | ++ EXTERNAL_API=false 2025-05-06 00:15:00.859551 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-06 00:15:00.859565 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-06 00:15:00.859579 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-06 00:15:00.859593 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-06 00:15:00.859611 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-06 00:15:00.859626 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-06 00:15:00.859640 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-06 00:15:00.859683 | orchestrator | + docker version 2025-05-06 00:15:01.115714 | orchestrator | Client: Docker Engine - Community 2025-05-06 00:15:01.119246 | orchestrator | Version: 26.1.4 2025-05-06 00:15:01.167373 | orchestrator | API version: 1.45 2025-05-06 00:15:01.167473 | orchestrator | Go version: go1.21.11 2025-05-06 00:15:01.167490 | orchestrator | Git commit: 5650f9b 2025-05-06 00:15:01.167505 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-06 00:15:01.167521 | orchestrator | OS/Arch: linux/amd64 2025-05-06 00:15:01.167535 | orchestrator | Context: default 2025-05-06 00:15:01.167550 | orchestrator | 2025-05-06 
00:15:01.167564 | orchestrator | Server: Docker Engine - Community 2025-05-06 00:15:01.167579 | orchestrator | Engine: 2025-05-06 00:15:01.167593 | orchestrator | Version: 26.1.4 2025-05-06 00:15:01.167607 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-06 00:15:01.167622 | orchestrator | Go version: go1.21.11 2025-05-06 00:15:01.167637 | orchestrator | Git commit: de5c9cf 2025-05-06 00:15:01.167682 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-06 00:15:01.167697 | orchestrator | OS/Arch: linux/amd64 2025-05-06 00:15:01.167711 | orchestrator | Experimental: false 2025-05-06 00:15:01.167726 | orchestrator | containerd: 2025-05-06 00:15:01.167739 | orchestrator | Version: 1.7.27 2025-05-06 00:15:01.167753 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-06 00:15:01.167768 | orchestrator | runc: 2025-05-06 00:15:01.167782 | orchestrator | Version: 1.2.5 2025-05-06 00:15:01.167797 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-06 00:15:01.167811 | orchestrator | docker-init: 2025-05-06 00:15:01.167825 | orchestrator | Version: 0.19.0 2025-05-06 00:15:01.167839 | orchestrator | GitCommit: de40ad0 2025-05-06 00:15:01.167886 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-06 00:15:02.188783 | orchestrator | + set -e 2025-05-06 00:15:02.188914 | orchestrator | + source /opt/manager-vars.sh 2025-05-06 00:15:02.188936 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-06 00:15:02.188952 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-06 00:15:02.188966 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-06 00:15:02.188981 | orchestrator | ++ CEPH_VERSION=reef 2025-05-06 00:15:02.188995 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-06 00:15:02.189011 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-06 00:15:02.189025 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-06 00:15:02.189040 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-06 
00:15:02.189054 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-06 00:15:02.189068 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-06 00:15:02.189082 | orchestrator | ++ export ARA=false 2025-05-06 00:15:02.189096 | orchestrator | ++ ARA=false 2025-05-06 00:15:02.189110 | orchestrator | ++ export TEMPEST=false 2025-05-06 00:15:02.189124 | orchestrator | ++ TEMPEST=false 2025-05-06 00:15:02.189138 | orchestrator | ++ export IS_ZUUL=true 2025-05-06 00:15:02.189152 | orchestrator | ++ IS_ZUUL=true 2025-05-06 00:15:02.189166 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.79 2025-05-06 00:15:02.189181 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.79 2025-05-06 00:15:02.189195 | orchestrator | ++ export EXTERNAL_API=false 2025-05-06 00:15:02.189232 | orchestrator | ++ EXTERNAL_API=false 2025-05-06 00:15:02.189247 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-06 00:15:02.189262 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-06 00:15:02.189281 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-06 00:15:02.189301 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-06 00:15:02.189315 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-06 00:15:02.189329 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-06 00:15:02.189343 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-06 00:15:02.189357 | orchestrator | ++ export INTERACTIVE=false 2025-05-06 00:15:02.189371 | orchestrator | ++ INTERACTIVE=false 2025-05-06 00:15:02.189386 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-06 00:15:02.189400 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-06 00:15:02.189414 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-06 00:15:02.189458 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-05-06 00:15:02.189475 | orchestrator | + set -e 2025-05-06 00:15:02.189490 | orchestrator | + VERSION=8.1.0 2025-05-06 00:15:02.189506 | orchestrator | + sed -i 
's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-06 00:15:02.189528 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-06 00:15:02.189543 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-06 00:15:02.189562 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-06 00:15:02.189577 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-06 00:15:02.189593 | orchestrator | /opt/configuration ~ 2025-05-06 00:15:02.189607 | orchestrator | + set -e 2025-05-06 00:15:02.189621 | orchestrator | + pushd /opt/configuration 2025-05-06 00:15:02.189635 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-06 00:15:02.189649 | orchestrator | + source /opt/venv/bin/activate 2025-05-06 00:15:02.189663 | orchestrator | ++ deactivate nondestructive 2025-05-06 00:15:02.189677 | orchestrator | ++ '[' -n '' ']' 2025-05-06 00:15:02.189691 | orchestrator | ++ '[' -n '' ']' 2025-05-06 00:15:02.189705 | orchestrator | ++ hash -r 2025-05-06 00:15:02.189720 | orchestrator | ++ '[' -n '' ']' 2025-05-06 00:15:02.189733 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-06 00:15:02.189748 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-06 00:15:02.189762 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-06 00:15:02.189776 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-06 00:15:02.189809 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-06 00:15:02.189824 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-06 00:15:02.189838 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-06 00:15:02.189852 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-06 00:15:02.189867 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-06 00:15:02.189881 | orchestrator | ++ export PATH 2025-05-06 00:15:02.189896 | orchestrator | ++ '[' -n '' ']' 2025-05-06 00:15:02.189909 | orchestrator | ++ '[' -z '' ']' 2025-05-06 00:15:02.189923 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-06 00:15:02.189937 | orchestrator | ++ PS1='(venv) ' 2025-05-06 00:15:02.189951 | orchestrator | ++ export PS1 2025-05-06 00:15:02.189966 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-06 00:15:02.189980 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-06 00:15:02.189994 | orchestrator | ++ hash -r 2025-05-06 00:15:02.190008 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-06 00:15:02.190100 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-06 00:15:02.190621 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-06 00:15:02.190648 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-06 00:15:02.191887 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-06 00:15:02.193013 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (25.0)
2025-05-06 00:15:02.202795 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8)
2025-05-06 00:15:02.204143 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-05-06 00:15:02.205118 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-05-06 00:15:02.206311 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-05-06 00:15:02.235793 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-05-06 00:15:02.237218 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-05-06 00:15:02.238875 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-05-06 00:15:02.240036 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-05-06 00:15:02.244079 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-05-06 00:15:02.445484 | orchestrator | ++ which gilt
2025-05-06 00:15:02.449769 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-05-06 00:15:02.449808 | orchestrator | + /opt/venv/bin/gilt overlay
2025-05-06 00:15:02.670385 | orchestrator | osism.cfg-generics:
2025-05-06 00:15:04.192340 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics
2025-05-06 00:15:04.192561 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-05-06 00:15:05.136474 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-05-06 00:15:05.136625 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-05-06 00:15:05.136648 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-05-06 00:15:05.136685 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-05-06 00:15:05.147228 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-05-06 00:15:05.440715 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-05-06 00:15:05.491928 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-06 00:15:05.493753 | orchestrator | + deactivate
2025-05-06 00:15:05.493802 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-06 00:15:05.493820 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-06 00:15:05.493834 | orchestrator | + export PATH
2025-05-06 00:15:05.493849 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-06 00:15:05.493863 | orchestrator | + '[' -n '' ']'
2025-05-06 00:15:05.493878 | orchestrator | + hash -r
2025-05-06 00:15:05.493892 | orchestrator | + '[' -n '' ']'
2025-05-06 00:15:05.493906 | orchestrator | + unset VIRTUAL_ENV
2025-05-06 00:15:05.493920 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-06 00:15:05.493934 | orchestrator | ~
2025-05-06 00:15:05.493949 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-06 00:15:05.493966 | orchestrator | + unset -f deactivate
2025-05-06 00:15:05.493981 | orchestrator | + popd
2025-05-06 00:15:05.494001 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-05-06 00:15:05.494459 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-05-06 00:15:05.494491 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-06 00:15:05.562160 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-06 00:15:05.609688 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-05-06 00:15:05.609792 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-05-06 00:15:05.609820 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-06 00:15:05.610515 | orchestrator | + source /opt/venv/bin/activate
2025-05-06 00:15:05.610527 | orchestrator | ++ deactivate nondestructive
2025-05-06 00:15:05.610538 | orchestrator | ++ '[' -n '' ']'
2025-05-06 00:15:05.610548 | orchestrator | ++ '[' -n '' ']'
2025-05-06 00:15:05.610557 | orchestrator | ++ hash -r
2025-05-06 00:15:05.610567 | orchestrator | ++ '[' -n '' ']'
2025-05-06 00:15:05.610576 | orchestrator | ++ unset VIRTUAL_ENV
2025-05-06 00:15:05.610586 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-05-06 00:15:05.610595 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-05-06 00:15:05.610605 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-05-06 00:15:05.610614 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-05-06 00:15:05.610624 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-05-06 00:15:05.610634 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-05-06 00:15:05.610644 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-06 00:15:05.610654 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-06 00:15:05.610663 | orchestrator | ++ export PATH
2025-05-06 00:15:05.610672 | orchestrator | ++ '[' -n '' ']'
2025-05-06 00:15:05.610681 | orchestrator | ++ '[' -z '' ']'
2025-05-06 00:15:05.610692 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-05-06 00:15:05.610701 | orchestrator | ++ PS1='(venv) '
2025-05-06 00:15:05.610711 | orchestrator | ++ export PS1
2025-05-06 00:15:05.610720 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-05-06 00:15:05.610728 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-05-06 00:15:05.610741 | orchestrator | ++ hash -r
2025-05-06 00:15:06.775615 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-05-06 00:15:06.775768 | orchestrator |
2025-05-06 00:15:07.357182 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-05-06 00:15:07.357315 | orchestrator |
2025-05-06 00:15:07.357334 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-06 00:15:07.357388 | orchestrator | ok: [testbed-manager]
2025-05-06 00:15:08.364504 | orchestrator |
2025-05-06 00:15:08.364638 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-06 00:15:08.364676 | orchestrator | changed: [testbed-manager]
2025-05-06 00:15:10.752025 | orchestrator |
2025-05-06 00:15:10.752164 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-05-06 00:15:10.752185 | orchestrator |
2025-05-06 00:15:10.752200 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-06 00:15:10.752243 | orchestrator | ok: [testbed-manager]
2025-05-06 00:15:16.035089 | orchestrator |
2025-05-06 00:15:16.035228 | orchestrator | TASK [Pull images] *************************************************************
2025-05-06 00:15:16.035300 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2)
2025-05-06 00:16:31.780413 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2)
2025-05-06 00:16:31.780616 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0)
2025-05-06 00:16:31.780640 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0)
2025-05-06 00:16:31.780656 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0)
2025-05-06 00:16:31.780671 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine)
2025-05-06 00:16:31.780686 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7)
2025-05-06 00:16:31.780700 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0)
2025-05-06 00:16:31.780715 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2)
2025-05-06 00:16:31.780737 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine)
2025-05-06 00:16:31.780753 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1)
2025-05-06 00:16:31.780767 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2)
2025-05-06 00:16:31.780781 | orchestrator |
2025-05-06 00:16:31.780796 | orchestrator | TASK [Check status] ************************************************************
2025-05-06 00:16:31.780829 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-06 00:16:31.819831 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-06 00:16:31.819926 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left).
2025-05-06 00:16:31.819943 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left).
2025-05-06 00:16:31.819959 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j975929763501.1592', 'results_file': '/home/dragon/.ansible_async/j975929763501.1592', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.819994 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j458738146.1617', 'results_file': '/home/dragon/.ansible_async/j458738146.1617', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820009 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-06 00:16:31.820023 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left).
2025-05-06 00:16:31.820037 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j720436069565.1642', 'results_file': '/home/dragon/.ansible_async/j720436069565.1642', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820058 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j921018927417.1674', 'results_file': '/home/dragon/.ansible_async/j921018927417.1674', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820078 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j873688061538.1706', 'results_file': '/home/dragon/.ansible_async/j873688061538.1706', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820093 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j639792261684.1738', 'results_file': '/home/dragon/.ansible_async/j639792261684.1738', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820107 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left).
2025-05-06 00:16:31.820121 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j214167978378.1770', 'results_file': '/home/dragon/.ansible_async/j214167978378.1770', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820170 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j886761143047.1809', 'results_file': '/home/dragon/.ansible_async/j886761143047.1809', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820185 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j77743826615.1836', 'results_file': '/home/dragon/.ansible_async/j77743826615.1836', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820200 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j298618137655.1869', 'results_file': '/home/dragon/.ansible_async/j298618137655.1869', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820214 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j738718670964.1907', 'results_file': '/home/dragon/.ansible_async/j738718670964.1907', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820228 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j984841445166.1943', 'results_file': '/home/dragon/.ansible_async/j984841445166.1943', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'})
2025-05-06 00:16:31.820242 | orchestrator |
2025-05-06 00:16:31.820258 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-05-06 00:16:31.820286 | orchestrator | ok: [testbed-manager]
2025-05-06 00:16:32.281006 | orchestrator |
2025-05-06 00:16:32.281143 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-05-06 00:16:32.281194 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:32.608194 | orchestrator |
2025-05-06 00:16:32.608344 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] *******************************
2025-05-06 00:16:32.608400 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:32.956175 | orchestrator |
2025-05-06 00:16:32.956302 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-05-06 00:16:32.956339 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:33.010845 | orchestrator |
2025-05-06 00:16:33.010969 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-05-06 00:16:33.011006 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:16:33.334082 | orchestrator |
2025-05-06 00:16:33.334202 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-05-06 00:16:33.334236 | orchestrator | ok: [testbed-manager]
2025-05-06 00:16:33.445247 | orchestrator |
2025-05-06 00:16:33.445371 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-05-06 00:16:33.445407 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:16:35.262697 | orchestrator |
2025-05-06 00:16:35.262827 | orchestrator | PLAY [Apply role traefik & netbox] *********************************************
2025-05-06 00:16:35.262848 | orchestrator |
2025-05-06 00:16:35.262864 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-06 00:16:35.262895 | orchestrator | ok: [testbed-manager]
2025-05-06 00:16:35.359113 | orchestrator |
2025-05-06 00:16:35.359232 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-05-06 00:16:35.359265 | orchestrator | included: osism.services.traefik for testbed-manager
2025-05-06 00:16:35.413285 | orchestrator |
2025-05-06 00:16:35.413382 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-05-06 00:16:35.413413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-05-06 00:16:36.516360 | orchestrator |
2025-05-06 00:16:36.516594 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-05-06 00:16:36.516640 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-05-06 00:16:38.275151 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-05-06 00:16:38.275304 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-05-06 00:16:38.275337 | orchestrator |
2025-05-06 00:16:38.275354 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-05-06 00:16:38.275385 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-05-06 00:16:38.898854 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-05-06 00:16:38.898965 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-05-06 00:16:38.898984 | orchestrator |
2025-05-06 00:16:38.899000 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-05-06 00:16:38.899030 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-06 00:16:39.525692 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:39.525821 | orchestrator |
2025-05-06 00:16:39.525842 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-05-06 00:16:39.525875 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-06 00:16:39.590645 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:39.590750 | orchestrator |
2025-05-06 00:16:39.590768 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-05-06 00:16:39.590799 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:16:39.936082 | orchestrator |
2025-05-06 00:16:39.936208 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-05-06 00:16:39.936248 | orchestrator | ok: [testbed-manager]
2025-05-06 00:16:39.996981 | orchestrator |
2025-05-06 00:16:39.997070 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-05-06 00:16:39.997101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-05-06 00:16:41.134359 | orchestrator |
2025-05-06 00:16:41.134482 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-05-06 00:16:41.134577 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:41.916315 | orchestrator |
2025-05-06 00:16:41.916440 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-05-06 00:16:41.916478 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:45.234181 | orchestrator |
2025-05-06 00:16:45.234281 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-05-06 00:16:45.234301 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:45.337718 | orchestrator |
2025-05-06 00:16:45.337826 | orchestrator | TASK [Apply netbox role] *******************************************************
2025-05-06 00:16:45.337854 | orchestrator | included: osism.services.netbox for testbed-manager
2025-05-06 00:16:45.398443 | orchestrator |
2025-05-06 00:16:45.398570 | orchestrator | TASK [osism.services.netbox : Include install tasks] ***************************
2025-05-06 00:16:45.398598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager
2025-05-06 00:16:48.237300 | orchestrator |
2025-05-06 00:16:48.237423 | orchestrator | TASK [osism.services.netbox : Install required packages] ***********************
2025-05-06 00:16:48.237461 | orchestrator | ok: [testbed-manager]
2025-05-06 00:16:48.377851 | orchestrator |
2025-05-06 00:16:48.377968 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-06 00:16:48.378004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager
2025-05-06 00:16:49.570150 | orchestrator |
2025-05-06 00:16:49.570274 | orchestrator | TASK [osism.services.netbox : Create required directories] *********************
2025-05-06 00:16:49.570311 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox)
2025-05-06 00:16:49.647267 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration)
2025-05-06 00:16:49.647383 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets)
2025-05-06 00:16:49.647401 | orchestrator |
2025-05-06 00:16:49.647416 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] *******************
2025-05-06 00:16:49.647479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager
2025-05-06 00:16:50.357377 | orchestrator |
2025-05-06 00:16:50.357488 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] *****************
2025-05-06 00:16:50.357534 | orchestrator | changed: [testbed-manager] => (item=postgres)
2025-05-06 00:16:51.008430 | orchestrator |
2025-05-06 00:16:51.008603 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] ****************
2025-05-06 00:16:51.008656 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:51.653405 | orchestrator |
2025-05-06 00:16:51.653574 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-06 00:16:51.653613 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-06 00:16:52.056173 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:52.056294 | orchestrator |
2025-05-06 00:16:52.056313 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] *****
2025-05-06 00:16:52.056344 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:52.414150 | orchestrator |
2025-05-06 00:16:52.414281 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] *******************
2025-05-06 00:16:52.414320 | orchestrator | ok: [testbed-manager]
2025-05-06 00:16:52.470451 | orchestrator |
2025-05-06 00:16:52.470613 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ******************************
2025-05-06 00:16:52.470647 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:16:53.096240 | orchestrator |
2025-05-06 00:16:53.096361 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] ***********
2025-05-06 00:16:53.096398 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:53.187578 | orchestrator |
2025-05-06 00:16:53.187699 | orchestrator | TASK [osism.services.netbox : Include config tasks] ****************************
2025-05-06 00:16:53.187734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager
2025-05-06 00:16:53.983087 | orchestrator |
2025-05-06 00:16:53.983232 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] ***********
2025-05-06 00:16:53.983285 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers)
2025-05-06 00:16:54.676031 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts)
2025-05-06 00:16:54.676157 | orchestrator |
2025-05-06 00:16:54.676180 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] *******************
2025-05-06 00:16:54.676211 | orchestrator | changed: [testbed-manager] => (item=netbox)
2025-05-06 00:16:55.354592 | orchestrator |
2025-05-06 00:16:55.354748 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ******************
2025-05-06 00:16:55.354807 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:55.408239 | orchestrator |
2025-05-06 00:16:55.408349 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] ****
2025-05-06 00:16:55.408375 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:16:56.067006 | orchestrator |
2025-05-06 00:16:56.067134 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] *****
2025-05-06 00:16:56.067169 | orchestrator | changed: [testbed-manager]
2025-05-06 00:16:57.910898 | orchestrator |
2025-05-06 00:16:57.911041 | orchestrator | TASK [osism.services.netbox : Copy secret files] *******************************
2025-05-06 00:16:57.911079 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-06 00:17:03.868907 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-06 00:17:03.869024 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-06 00:17:03.869038 | orchestrator | changed: [testbed-manager]
2025-05-06 00:17:03.869048 | orchestrator |
2025-05-06 00:17:03.869058 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ******************
2025-05-06 00:17:03.869082 | orchestrator | changed: [testbed-manager] => (item=custom_fields)
2025-05-06 00:17:04.545092 | orchestrator | changed: [testbed-manager] => (item=device_roles)
2025-05-06 00:17:04.545217 | orchestrator | changed: [testbed-manager] => (item=device_types)
2025-05-06 00:17:04.545238 | orchestrator | changed: [testbed-manager] => (item=groups)
2025-05-06 00:17:04.545254 | orchestrator | changed: [testbed-manager] => (item=manufacturers)
2025-05-06 00:17:04.545270 | orchestrator | changed: [testbed-manager] => (item=object_permissions)
2025-05-06 00:17:04.545318 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles)
2025-05-06 00:17:04.545333 | orchestrator | changed: [testbed-manager] => (item=sites)
2025-05-06 00:17:04.545348 | orchestrator | changed: [testbed-manager] => (item=tags)
2025-05-06 00:17:04.545363 | orchestrator | changed: [testbed-manager] => (item=users)
2025-05-06 00:17:04.545377 | orchestrator |
2025-05-06 00:17:04.545392 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] ***************
2025-05-06 00:17:04.545424 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py)
2025-05-06 00:17:04.637190 | orchestrator |
2025-05-06 00:17:04.637305 | orchestrator | TASK [osism.services.netbox : Include service tasks] ***************************
2025-05-06 00:17:04.637340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager
2025-05-06 00:17:05.364503 | orchestrator |
2025-05-06 00:17:05.364647 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] *******************
2025-05-06 00:17:05.364684 | orchestrator | changed: [testbed-manager]
2025-05-06 00:17:06.017208 | orchestrator |
2025-05-06 00:17:06.017331 | orchestrator | TASK [osism.services.netbox : Create traefik external network] *****************
2025-05-06 00:17:06.017367 | orchestrator | ok: [testbed-manager]
2025-05-06 00:17:06.754891 | orchestrator |
2025-05-06 00:17:06.755101 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ********************
2025-05-06 00:17:06.755144 | orchestrator | changed: [testbed-manager]
2025-05-06 00:17:12.475085 | orchestrator |
2025-05-06 00:17:12.475210 | orchestrator | TASK [osism.services.netbox : Pull container images] ***************************
2025-05-06 00:17:12.475242 | orchestrator | changed: [testbed-manager]
2025-05-06 00:17:13.478628 | orchestrator |
2025-05-06 00:17:13.478791 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] ***
2025-05-06 00:17:13.478852 | orchestrator | ok: [testbed-manager]
2025-05-06 00:17:35.729384 | orchestrator |
2025-05-06 00:17:35.729573 | orchestrator | TASK [osism.services.netbox : Manage netbox service] ***************************
2025-05-06 00:17:35.729617 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left).
2025-05-06 00:17:35.791087 | orchestrator | ok: [testbed-manager]
2025-05-06 00:17:35.791187 | orchestrator |
2025-05-06 00:17:35.791206 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ********
2025-05-06 00:17:35.791240 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:17:35.839227 | orchestrator |
2025-05-06 00:17:35.839336 | orchestrator | TASK [osism.services.netbox : Flush handlers] **********************************
2025-05-06 00:17:35.839355 | orchestrator |
2025-05-06 00:17:35.839371 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-05-06 00:17:35.839400 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:17:35.915621 | orchestrator |
2025-05-06 00:17:35.915743 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-06 00:17:35.915777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager
2025-05-06 00:17:36.731091 | orchestrator |
2025-05-06 00:17:36.731249 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ******
2025-05-06 00:17:36.731302 | orchestrator | ok: [testbed-manager]
2025-05-06 00:17:36.806558 | orchestrator |
2025-05-06 00:17:36.806671 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] ***
2025-05-06 00:17:36.806706 | orchestrator | ok: [testbed-manager]
2025-05-06 00:17:36.862988 | orchestrator |
2025-05-06 00:17:36.863089 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] ***
2025-05-06 00:17:36.863122 | orchestrator | ok: [testbed-manager] => {
2025-05-06 00:17:37.514142 | orchestrator | "msg": "The major version of the running postgres container is 16"
2025-05-06 00:17:37.514271 | orchestrator | }
2025-05-06 00:17:37.514292 | orchestrator |
2025-05-06 00:17:37.514309 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ******************
2025-05-06 00:17:37.514341 | orchestrator | ok: [testbed-manager]
2025-05-06 00:17:38.410283 | orchestrator |
2025-05-06 00:17:38.410411 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] **********
2025-05-06 00:17:38.410485 | orchestrator | ok: [testbed-manager]
2025-05-06 00:17:38.480615 | orchestrator |
2025-05-06 00:17:38.480723 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ******
2025-05-06 00:17:38.480757 | orchestrator | ok: [testbed-manager]
2025-05-06 00:17:38.516376 | orchestrator |
2025-05-06 00:17:38.516453 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] ***
2025-05-06 00:17:38.516496 | orchestrator | ok: [testbed-manager] => {
2025-05-06 00:17:38.570822 | orchestrator | "msg": "The major version of the postgres image is 16"
2025-05-06 00:17:38.570895 | orchestrator | }
2025-05-06 00:17:38.570911 | orchestrator |
2025-05-06 00:17:38.570926 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ******************
2025-05-06 00:17:38.570952 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:17:38.617050 | orchestrator |
2025-05-06 00:17:38.617122 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ******
2025-05-06 00:17:38.617151 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:17:38.666479 | orchestrator |
2025-05-06 00:17:38.666603 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] *********
2025-05-06 00:17:38.666631 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:17:38.727692 | orchestrator |
2025-05-06 00:17:38.727777 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************
2025-05-06 00:17:38.727805 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:17:38.780062 | orchestrator |
2025-05-06 00:17:38.780142 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] ***
2025-05-06 00:17:38.780171 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:17:38.842181 | orchestrator |
2025-05-06 00:17:38.842286 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] *****************
2025-05-06 00:17:38.842321 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:17:40.103465 | orchestrator |
2025-05-06 00:17:40.103636 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] ***************
2025-05-06 00:17:40.103711 | orchestrator | changed: [testbed-manager]
2025-05-06 00:17:40.185944 | orchestrator |
2025-05-06 00:17:40.186185 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] ***
2025-05-06 00:17:40.186237 | orchestrator | ok: [testbed-manager]
2025-05-06 00:18:40.253323 | orchestrator |
2025-05-06 00:18:40.253466 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] *****
2025-05-06 00:18:40.253505 | orchestrator | Pausing for 60 seconds
2025-05-06 00:18:40.310202 | orchestrator | changed: [testbed-manager]
2025-05-06 00:18:40.310323 | orchestrator |
2025-05-06 00:18:40.310344 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] ***
2025-05-06 00:18:40.310380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager
2025-05-06 00:22:20.561355 | orchestrator |
2025-05-06 00:22:20.561495 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] ***
2025-05-06 00:22:20.561534 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left).
2025-05-06 00:22:22.731680 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-05-06 00:22:22.731812 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-05-06 00:22:22.731832 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-05-06 00:22:22.731848 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-05-06 00:22:22.731862 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-05-06 00:22:22.731877 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-05-06 00:22:22.731890 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-05-06 00:22:22.731904 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-05-06 00:22:22.731918 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-05-06 00:22:22.731965 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-05-06 00:22:22.731980 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-05-06 00:22:22.731994 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-05-06 00:22:22.732008 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-05-06 00:22:22.732022 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-05-06 00:22:22.732036 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-05-06 00:22:22.732050 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-05-06 00:22:22.732064 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-05-06 00:22:22.732078 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-05-06 00:22:22.732105 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left).
2025-05-06 00:22:22.732120 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left).
2025-05-06 00:22:22.732134 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:22.732150 | orchestrator |
2025-05-06 00:22:22.732165 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-05-06 00:22:22.732179 | orchestrator |
2025-05-06 00:22:22.732193 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-06 00:22:22.732224 | orchestrator | ok: [testbed-manager]
2025-05-06 00:22:22.838153 | orchestrator |
2025-05-06 00:22:22.838285 | orchestrator | TASK [Apply manager role] ******************************************************
2025-05-06 00:22:22.838329 | orchestrator | included: osism.services.manager for testbed-manager
2025-05-06 00:22:22.898246 | orchestrator |
2025-05-06 00:22:22.898359 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-05-06 00:22:22.898393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-05-06 00:22:24.846381 | orchestrator |
2025-05-06 00:22:24.846508 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-05-06 00:22:24.846544 | orchestrator | ok: [testbed-manager]
2025-05-06 00:22:24.905360 | orchestrator |
2025-05-06 00:22:24.905465 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-05-06 00:22:24.905498 | orchestrator | ok: [testbed-manager]
2025-05-06 00:22:25.010219 | orchestrator |
2025-05-06 00:22:25.010334 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-05-06 00:22:25.010369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-05-06 00:22:27.829989 | orchestrator |
2025-05-06 00:22:27.830241 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-05-06 00:22:27.830284 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-05-06 00:22:28.473762 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-05-06 00:22:28.473891 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-05-06 00:22:28.473912 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-05-06 00:22:28.473927 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-05-06 00:22:28.473943 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-05-06 00:22:28.473957 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-05-06 00:22:28.473971 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-05-06 00:22:28.473985 | orchestrator |
2025-05-06 00:22:28.474000 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-05-06 00:22:28.474094 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:28.557476 | orchestrator |
2025-05-06 00:22:28.557555 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-05-06 00:22:28.557587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-05-06 00:22:29.740884 | orchestrator |
2025-05-06 00:22:29.741014 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-05-06 00:22:29.741052 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-05-06 00:22:30.338826 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-05-06 00:22:30.338962 | orchestrator |
2025-05-06 00:22:30.338993 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-05-06 00:22:30.339040 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:30.402808 | orchestrator |
2025-05-06 00:22:30.402937 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-05-06 00:22:30.402990 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:22:30.469071 | orchestrator |
2025-05-06 00:22:30.469200 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-05-06 00:22:30.469229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-05-06 00:22:31.840746 | orchestrator |
2025-05-06 00:22:31.840882 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-05-06 00:22:31.840921 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-06 00:22:32.464727 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-06 00:22:32.464857 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:32.464877 | orchestrator |
2025-05-06 00:22:32.464893 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-05-06 00:22:32.464925 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:32.572990 | orchestrator |
2025-05-06 00:22:32.573099 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-05-06 00:22:32.573147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-05-06 00:22:33.224434 | orchestrator |
2025-05-06 00:22:33.224560 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-05-06 00:22:33.224595 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-06 00:22:33.859878 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:33.860006 | orchestrator |
2025-05-06 00:22:33.860028 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-05-06 00:22:33.860061 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:33.962119 | orchestrator |
2025-05-06 00:22:33.962231 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-05-06 00:22:33.962264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-05-06 00:22:34.547720 | orchestrator |
2025-05-06 00:22:34.547840 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-05-06 00:22:34.547889 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:34.966409 | orchestrator |
2025-05-06 00:22:34.966522 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-05-06 00:22:34.966553 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:36.159931 | orchestrator |
2025-05-06 00:22:36.160136 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-05-06 00:22:36.160206 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-05-06 00:22:36.877416 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-05-06 00:22:36.877562 | orchestrator |
2025-05-06 00:22:36.877584 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-05-06 00:22:36.877664 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:37.270578 | orchestrator |
2025-05-06 00:22:37.270744 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-05-06 00:22:37.270780 | orchestrator | ok: [testbed-manager]
2025-05-06 00:22:37.634795 | orchestrator |
2025-05-06 00:22:37.634914 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-05-06 00:22:37.634986 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:37.689848 | orchestrator |
2025-05-06 00:22:37.689966 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-05-06 00:22:37.690010 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:22:37.772973 | orchestrator |
2025-05-06 00:22:37.773080 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-05-06 00:22:37.773114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-05-06 00:22:37.822403 | orchestrator |
2025-05-06 00:22:37.822509 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-05-06 00:22:37.822541 | orchestrator | ok: [testbed-manager]
2025-05-06 00:22:39.821878 | orchestrator |
2025-05-06 00:22:39.822010 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-05-06 00:22:39.822146 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-05-06 00:22:40.541681 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-05-06 00:22:40.541800 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-05-06 00:22:40.541819 | orchestrator |
2025-05-06 00:22:40.541834 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-05-06 00:22:40.541864 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:41.240900 | orchestrator |
2025-05-06 00:22:41.241031 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-05-06 00:22:41.241067 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:41.945076 | orchestrator |
2025-05-06 00:22:41.945227 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-05-06 00:22:41.945265 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:42.013049 | orchestrator |
2025-05-06 00:22:42.013120 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-05-06 00:22:42.013152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-05-06 00:22:42.057798 | orchestrator |
2025-05-06 00:22:42.057864 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-05-06 00:22:42.057894 | orchestrator | ok: [testbed-manager]
2025-05-06 00:22:42.756320 | orchestrator |
2025-05-06 00:22:42.756443 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-05-06 00:22:42.756480 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-05-06 00:22:42.833918 | orchestrator |
2025-05-06 00:22:42.834102 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-05-06 00:22:42.834148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-05-06 00:22:43.533520 | orchestrator |
2025-05-06 00:22:43.533704 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-05-06 00:22:43.533741 | orchestrator | changed: [testbed-manager]
2025-05-06 00:22:44.139982 | orchestrator |
2025-05-06 00:22:44.140106 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-05-06 00:22:44.140142 | orchestrator | ok: [testbed-manager]
2025-05-06 00:22:44.202312 | orchestrator |
2025-05-06 00:22:44.202424 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-05-06 00:22:44.202456 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:22:44.255840 | orchestrator |
2025-05-06 00:22:44.255946 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-05-06 00:22:44.255978 | orchestrator | ok: [testbed-manager]
2025-05-06 00:22:45.072784 | orchestrator |
2025-05-06 00:22:45.072942 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-05-06 00:22:45.072983 | orchestrator | changed: [testbed-manager]
2025-05-06 00:23:26.780195 | orchestrator |
2025-05-06 00:23:26.780344 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-05-06 00:23:26.780381 | orchestrator | changed: [testbed-manager]
2025-05-06 00:23:27.453784 | orchestrator |
2025-05-06 00:23:27.453916 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-05-06 00:23:27.453955 | orchestrator | ok: [testbed-manager]
2025-05-06 00:23:30.159891 | orchestrator |
2025-05-06 00:23:30.160024 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-05-06 00:23:30.160061 | orchestrator | changed: [testbed-manager]
2025-05-06 00:23:30.219739 | orchestrator |
2025-05-06 00:23:30.219861 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-05-06 00:23:30.219897 | orchestrator | ok: [testbed-manager]
2025-05-06 00:23:30.264288 | orchestrator |
2025-05-06 00:23:30.264388 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-06 00:23:30.264406 | orchestrator |
2025-05-06 00:23:30.264421 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-05-06 00:23:30.264451 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:24:30.332070 | orchestrator |
2025-05-06 00:24:30.332206 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-05-06 00:24:30.332241 | orchestrator | Pausing for 60 seconds
2025-05-06 00:24:35.794139 | orchestrator | changed: [testbed-manager]
2025-05-06 00:24:35.794251 | orchestrator |
2025-05-06 00:24:35.794262 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-05-06 00:24:35.794284 | orchestrator | changed: [testbed-manager]
2025-05-06 00:25:17.468943 | orchestrator |
2025-05-06 00:25:17.469088 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-05-06 00:25:17.469127 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-05-06 00:25:22.956372 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-05-06 00:25:22.956509 | orchestrator | changed: [testbed-manager]
2025-05-06 00:25:22.956531 | orchestrator |
2025-05-06 00:25:22.956565 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-05-06 00:25:22.956673 | orchestrator | changed: [testbed-manager]
2025-05-06 00:25:23.055840 | orchestrator |
2025-05-06 00:25:23.055970 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-05-06 00:25:23.056023 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-05-06 00:25:23.119762 | orchestrator |
2025-05-06 00:25:23.119877 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-06 00:25:23.119895 | orchestrator |
2025-05-06 00:25:23.119910 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-05-06 00:25:23.119941 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:25:23.236077 | orchestrator |
2025-05-06 00:25:23.236189 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:25:23.236208 | orchestrator | testbed-manager : ok=109 changed=58 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025-05-06 00:25:23.236223 | orchestrator |
2025-05-06 00:25:23.236255 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-06 00:25:23.244032 | orchestrator | + deactivate
2025-05-06 00:25:23.244082 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-06 00:25:23.244099 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-06 00:25:23.244113 | orchestrator | + export PATH
2025-05-06 00:25:23.244128 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-06 00:25:23.244143 | orchestrator | + '[' -n '' ']'
2025-05-06 00:25:23.244157 | orchestrator | + hash -r
2025-05-06 00:25:23.244171 | orchestrator | + '[' -n '' ']'
2025-05-06 00:25:23.244185 | orchestrator | + unset VIRTUAL_ENV
2025-05-06 00:25:23.244200 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-06 00:25:23.244223 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-06 00:25:23.244250 | orchestrator | + unset -f deactivate
2025-05-06 00:25:23.244279 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-05-06 00:25:23.244417 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-06 00:25:23.244594 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-06 00:25:23.244710 | orchestrator | + local max_attempts=60
2025-05-06 00:25:23.244731 | orchestrator | + local name=ceph-ansible
2025-05-06 00:25:23.244747 | orchestrator | + local attempt_num=1
2025-05-06 00:25:23.244777 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-06 00:25:23.267846 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-06 00:25:23.269136 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-06 00:25:23.269164 | orchestrator | + local max_attempts=60
2025-05-06 00:25:23.269179 | orchestrator | + local name=kolla-ansible
2025-05-06 00:25:23.269193 | orchestrator | + local attempt_num=1
2025-05-06 00:25:23.269212 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-06 00:25:23.299412 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-06 00:25:23.300807 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-06 00:25:23.300835 | orchestrator | + local max_attempts=60
2025-05-06 00:25:23.300850 | orchestrator | + local name=osism-ansible
2025-05-06 00:25:23.300864 | orchestrator | + local attempt_num=1
2025-05-06 00:25:23.300884 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-06 00:25:23.329297 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-06 00:25:24.041763 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-06 00:25:24.041885 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-06 00:25:24.041923 | orchestrator | ++ semver 8.1.0 9.0.0
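The `set -x` trace above shows a `wait_for_container_healthy` helper polling `docker inspect` for each container's health status; in this run every container reports `healthy` on the first check, so the loop body never executes. A minimal sketch of such a helper, reconstructed from the trace (the polling interval, failure message, and loop structure are assumptions, not the actual script from the testbed repository):

```shell
# Hedged reconstruction of wait_for_container_healthy as seen in the trace.
# Retry interval and error handling are assumptions; the real script may differ.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status until it reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 1
    done
}
```

In the log each of the three calls (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) returns immediately because `docker inspect` already reports `healthy`.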
2025-05-06 00:25:24.090332 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-06 00:25:24.293751 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]]
2025-05-06 00:25:24.293870 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-05-06 00:25:24.293905 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-06 00:25:24.299353 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299386 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299400 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-05-06 00:25:24.299438 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-05-06 00:25:24.299454 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299472 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299487 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299501 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy)
2025-05-06 00:25:24.299515 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299530 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-05-06 00:25:24.299543 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299557 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299626 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-05-06 00:25:24.299643 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299657 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299671 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299685 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy)
2025-05-06 00:25:24.299707 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-05-06 00:25:24.439875 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-06 00:25:24.447935 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy)
2025-05-06 00:25:24.447987 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy)
2025-05-06 00:25:24.448004 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp
2025-05-06 00:25:24.448020 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp
2025-05-06 00:25:24.448043 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-06 00:25:24.496524 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-06 00:25:24.500076 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-05-06 00:25:24.500119 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-05-06 00:25:26.042251 | orchestrator | 2025-05-06 00:25:26 | INFO  | Task f26ef258-259e-4407-94b4-fdae965f89ad (resolvconf) was prepared for execution.
2025-05-06 00:25:28.999640 | orchestrator | 2025-05-06 00:25:26 | INFO  | It takes a moment until task f26ef258-259e-4407-94b4-fdae965f89ad (resolvconf) has been started and output is visible here.
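The trace gates script steps on a `semver` helper that performs a three-way version comparison: `semver 8.1.0 9.0.0` yields `-1` (so `[[ -1 -ge 0 ]]` fails and the step is skipped) and `semver 8.1.0 7.0.0` yields `1` (so the `sed` rewrite runs). An illustrative equivalent using GNU `sort -V` (the name `semver_cmp` and this implementation are hypothetical stand-ins, not the actual helper from the testbed scripts):

```shell
# Illustrative three-way version comparison printing -1, 0, or 1, mirroring
# how the semver helper is used in the trace. Assumes GNU sort -V; this is
# NOT the actual helper shipped with the testbed.
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1   # $1 sorts first under version ordering, so $1 < $2
    else
        echo 1
    fi
}
```

A check such as `[[ $(semver_cmp 8.1.0 9.0.0) -ge 0 ]]` then matches the `[[ -1 -ge 0 ]]` test seen in the trace.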
2025-05-06 00:25:28.999796 | orchestrator |
2025-05-06 00:25:29.000543 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-05-06 00:25:29.001348 | orchestrator |
2025-05-06 00:25:29.003014 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-06 00:25:29.003358 | orchestrator | Tuesday 06 May 2025 00:25:28 +0000 (0:00:00.082) 0:00:00.082 ***********
2025-05-06 00:25:32.963932 | orchestrator | ok: [testbed-manager]
2025-05-06 00:25:32.964698 | orchestrator |
2025-05-06 00:25:32.964744 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-06 00:25:32.964770 | orchestrator | Tuesday 06 May 2025 00:25:32 +0000 (0:00:03.966) 0:00:04.048 ***********
2025-05-06 00:25:33.030266 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:25:33.030820 | orchestrator |
2025-05-06 00:25:33.030857 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-06 00:25:33.031629 | orchestrator | Tuesday 06 May 2025 00:25:33 +0000 (0:00:00.066) 0:00:04.115 ***********
2025-05-06 00:25:33.122659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-05-06 00:25:33.123546 | orchestrator |
2025-05-06 00:25:33.124622 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-06 00:25:33.125374 | orchestrator | Tuesday 06 May 2025 00:25:33 +0000 (0:00:00.091) 0:00:04.206 ***********
2025-05-06 00:25:33.194120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-05-06 00:25:33.194733 | orchestrator |
2025-05-06 00:25:33.194938 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-06 00:25:33.196432 | orchestrator | Tuesday 06 May 2025 00:25:33 +0000 (0:00:00.073) 0:00:04.280 ***********
2025-05-06 00:25:34.268121 | orchestrator | ok: [testbed-manager]
2025-05-06 00:25:34.269299 | orchestrator |
2025-05-06 00:25:34.269876 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-06 00:25:34.271107 | orchestrator | Tuesday 06 May 2025 00:25:34 +0000 (0:00:01.071) 0:00:05.352 ***********
2025-05-06 00:25:34.320703 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:25:34.321010 | orchestrator |
2025-05-06 00:25:34.322903 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-06 00:25:34.323454 | orchestrator | Tuesday 06 May 2025 00:25:34 +0000 (0:00:00.054) 0:00:05.406 ***********
2025-05-06 00:25:34.810079 | orchestrator | ok: [testbed-manager]
2025-05-06 00:25:34.810507 | orchestrator |
2025-05-06 00:25:34.811295 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-06 00:25:34.812649 | orchestrator | Tuesday 06 May 2025 00:25:34 +0000 (0:00:00.488) 0:00:05.895 ***********
2025-05-06 00:25:34.894454 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:25:34.894751 | orchestrator |
2025-05-06 00:25:34.894786 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-06 00:25:34.895201 | orchestrator | Tuesday 06 May 2025 00:25:34 +0000 (0:00:00.083) 0:00:05.979 ***********
2025-05-06 00:25:35.448712 | orchestrator | changed: [testbed-manager]
2025-05-06 00:25:35.449426 | orchestrator |
2025-05-06 00:25:35.449730 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-06 00:25:35.450494 | orchestrator | Tuesday 06 May 2025 00:25:35 +0000 (0:00:00.554) 0:00:06.533 ***********
2025-05-06 00:25:36.530251 | orchestrator | changed: [testbed-manager]
2025-05-06 00:25:36.530464 | orchestrator |
2025-05-06 00:25:36.530492 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-06 00:25:36.530513 | orchestrator | Tuesday 06 May 2025 00:25:36 +0000 (0:00:01.079) 0:00:07.613 ***********
2025-05-06 00:25:37.490259 | orchestrator | ok: [testbed-manager]
2025-05-06 00:25:37.491077 | orchestrator |
2025-05-06 00:25:37.491135 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-06 00:25:37.491749 | orchestrator | Tuesday 06 May 2025 00:25:37 +0000 (0:00:00.961) 0:00:08.574 ***********
2025-05-06 00:25:37.552677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-05-06 00:25:37.552930 | orchestrator |
2025-05-06 00:25:37.553267 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-06 00:25:37.553526 | orchestrator | Tuesday 06 May 2025 00:25:37 +0000 (0:00:00.064) 0:00:08.639 ***********
2025-05-06 00:25:38.672367 | orchestrator | changed: [testbed-manager]
2025-05-06 00:25:38.672640 | orchestrator |
2025-05-06 00:25:38.672689 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:25:38.673438 | orchestrator | 2025-05-06 00:25:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-06 00:25:38.673884 | orchestrator | 2025-05-06 00:25:38 | INFO  | Please wait and do not abort execution.
2025-05-06 00:25:38.673921 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:25:38.674537 | orchestrator |
2025-05-06 00:25:38.674602 | orchestrator | Tuesday 06 May 2025 00:25:38 +0000 (0:00:01.117) 0:00:09.757 ***********
2025-05-06 00:25:38.674961 | orchestrator | ===============================================================================
2025-05-06 00:25:38.675394 | orchestrator | Gathering Facts --------------------------------------------------------- 3.97s
2025-05-06 00:25:38.676163 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s
2025-05-06 00:25:38.676317 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s
2025-05-06 00:25:38.676347 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.07s
2025-05-06 00:25:38.676625 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.96s
2025-05-06 00:25:38.677108 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s
2025-05-06 00:25:38.677399 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2025-05-06 00:25:38.677760 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-05-06 00:25:38.678193 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-05-06 00:25:38.678635 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-05-06 00:25:38.678984 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-05-06 00:25:38.679318 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.06s
2025-05-06 00:25:38.679670 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s
2025-05-06 00:25:39.024094 | orchestrator | + osism apply sshconfig
2025-05-06 00:25:40.423414 | orchestrator | 2025-05-06 00:25:40 | INFO  | Task d1545aa6-88f1-41d6-ad7d-f38d36dd6c6c (sshconfig) was prepared for execution.
2025-05-06 00:25:43.308402 | orchestrator | 2025-05-06 00:25:40 | INFO  | It takes a moment until task d1545aa6-88f1-41d6-ad7d-f38d36dd6c6c (sshconfig) has been started and output is visible here.
2025-05-06 00:25:43.308558 | orchestrator |
2025-05-06 00:25:43.309060 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-05-06 00:25:43.309980 | orchestrator |
2025-05-06 00:25:43.310212 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-05-06 00:25:43.310896 | orchestrator | Tuesday 06 May 2025 00:25:43 +0000 (0:00:00.076) 0:00:00.076 ***********
2025-05-06 00:25:43.892112 | orchestrator | ok: [testbed-manager]
2025-05-06 00:25:43.892638 | orchestrator |
2025-05-06 00:25:43.893659 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-05-06 00:25:43.893960 | orchestrator | Tuesday 06 May 2025 00:25:43 +0000 (0:00:00.583) 0:00:00.660 ***********
2025-05-06 00:25:44.312667 | orchestrator | changed: [testbed-manager]
2025-05-06 00:25:44.313030 | orchestrator |
2025-05-06 00:25:44.313089 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-05-06 00:25:44.313607 | orchestrator | Tuesday 06 May 2025 00:25:44 +0000 (0:00:00.421) 0:00:01.082 ***********
2025-05-06 00:25:49.560165 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-05-06 00:25:49.562983 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-05-06 00:25:49.564263 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-05-06 00:25:49.564293 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-05-06 00:25:49.564312 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-06 00:25:49.564914 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-05-06 00:25:49.565313 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-05-06 00:25:49.565832 | orchestrator |
2025-05-06 00:25:49.566425 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-05-06 00:25:49.566827 | orchestrator | Tuesday 06 May 2025 00:25:49 +0000 (0:00:05.245) 0:00:06.327 ***********
2025-05-06 00:25:49.634691 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:25:49.634883 | orchestrator |
2025-05-06 00:25:49.636146 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-05-06 00:25:49.636905 | orchestrator | Tuesday 06 May 2025 00:25:49 +0000 (0:00:00.075) 0:00:06.403 ***********
2025-05-06 00:25:50.189659 | orchestrator | changed: [testbed-manager]
2025-05-06 00:25:50.190331 | orchestrator |
2025-05-06 00:25:50.190811 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:25:50.191496 | orchestrator | 2025-05-06 00:25:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-06 00:25:50.192085 | orchestrator | 2025-05-06 00:25:50 | INFO  | Please wait and do not abort execution.
2025-05-06 00:25:50.192127 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:25:50.192944 | orchestrator | 2025-05-06 00:25:50.193934 | orchestrator | Tuesday 06 May 2025 00:25:50 +0000 (0:00:00.555) 0:00:06.958 *********** 2025-05-06 00:25:50.194711 | orchestrator | =============================================================================== 2025-05-06 00:25:50.195230 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.25s 2025-05-06 00:25:50.195626 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2025-05-06 00:25:50.196044 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2025-05-06 00:25:50.196458 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.42s 2025-05-06 00:25:50.197032 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-05-06 00:25:50.569030 | orchestrator | + osism apply known-hosts 2025-05-06 00:25:51.942822 | orchestrator | 2025-05-06 00:25:51 | INFO  | Task ab48fb80-1077-4ffc-942d-197f0ba8a1a4 (known-hosts) was prepared for execution. 2025-05-06 00:25:54.868066 | orchestrator | 2025-05-06 00:25:51 | INFO  | It takes a moment until task ab48fb80-1077-4ffc-942d-197f0ba8a1a4 (known-hosts) has been started and output is visible here. 
2025-05-06 00:25:54.868222 | orchestrator | 2025-05-06 00:25:54.870144 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-06 00:25:54.871227 | orchestrator | 2025-05-06 00:25:54.871454 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-06 00:25:54.871490 | orchestrator | Tuesday 06 May 2025 00:25:54 +0000 (0:00:00.104) 0:00:00.104 *********** 2025-05-06 00:26:00.826324 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-06 00:26:00.826710 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-06 00:26:00.827418 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-06 00:26:00.828307 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-06 00:26:00.828989 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-06 00:26:00.830302 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-06 00:26:00.830821 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-06 00:26:00.831523 | orchestrator | 2025-05-06 00:26:00.831772 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-06 00:26:00.832499 | orchestrator | Tuesday 06 May 2025 00:26:00 +0000 (0:00:05.957) 0:00:06.061 *********** 2025-05-06 00:26:00.990754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-06 00:26:00.991103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-06 00:26:00.991137 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-06 00:26:00.991191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-06 00:26:00.991824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-06 00:26:00.992313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-06 00:26:00.992763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-06 00:26:00.993460 | orchestrator | 2025-05-06 00:26:00.993723 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:00.994729 | orchestrator | Tuesday 06 May 2025 00:26:00 +0000 (0:00:00.165) 0:00:06.227 *********** 2025-05-06 00:26:02.101718 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMDWFEmpOxwQkVqQBTKHLNJBMgVUR1+YsysVT/qKTyY) 2025-05-06 00:26:02.103069 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC51TZ4BofYJnCOA1tvboA2mNzmYxL8o8Defg6rg2z3za2IuJLRtUoQlla5ZsGKat8yhqjmlrZRRdJXNr7ZZxvfQfwHuXKpRAzgGITeSqF/MUCSYkS1xubrU1FlS5zbeOkmwFOji/NyZKLiluMJuXlzElxeBWD3g0uCcLR2jfBpTR+CSunFNijTHzEFF2iMsMM2uPwl31RVM8WH2C4cldIn/XsFsnvcyC1Eu46y9pBmHXm9TvD0xicyYO2d5aQdmyP7+in52uEN079xhIgFmXzaqf8DC3pT5ScE8OLTHgPIqv67A3CIYrjBFMX2J4eXX8lIHsKmsioduaD1JvYy3S4BoViUDTZ+G2umgMKRhF4UXN8xvSsapPSWZpM5J4Cs3aAf1vwQ+aS0z4yG17QYlBcAnz5vWm+gPmWgx7Lys7zZ7k/R8JK7Ua51xdBttvU9o5o5eJ7YGhAIV/WdiVOCO7b+BUSe9MnOyli6ShBNr/DZr3vkouJKNHR6xO29xwpabDc=) 2025-05-06 00:26:02.103420 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGhWx/jRNZRcWcLRw/a4j7sLctacXVBjPkJZsNeWFmRkZ2wUJ16QKbFoxzTwuRTMNUzv4oR19Ln3n7IG902KE/k=) 2025-05-06 00:26:02.104271 | orchestrator | 2025-05-06 00:26:02.104529 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:02.107642 | orchestrator | Tuesday 06 May 2025 00:26:02 +0000 (0:00:01.112) 0:00:07.340 *********** 2025-05-06 00:26:03.122156 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEQefKsrm3Vfi+PmK99ZGG8Kt5lzB96VHJqOiBuHaKaU) 2025-05-06 00:26:03.122415 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDvnfbjjPfvdZxi3TTMugLt3NxrMM9V8SJX4r1jOkmvSKAR5iv0DmxkmgvsrgOeyGmengujUn1ip+D7XJ/RA/Zu4YUyBB/6pQNE1TBvwCEIw/mUB6O5ppiQWa6HjhOfZ31VgoKxw46Mb6bwleWn00c1T0rKFG8w9+JamEMhhDQ6aDzg6vIJlcqC1I2VVqwX//bA2nM4vjRvCo0NFLh5bMEVC8GYS/Zmij/5oI9iVhhvUkY1TCJcpLBaA6Hu8C01zF1kpjJPIC7Gk8PLKEEQUKIqDs71zTRSFV8OKWVFHGthZfpyLw4EpUodH1bfLerCaWxsljkAFRoU28ciHMCWIGh2yDuyf0z6EJIS/8XAaKzfbm7QZOo5a9L7Oxy5AM0qK8w3AIRAaglQtQZbae4JieuknQ39sUu6VMsxFNCUJ2pDVspUieKjoLYoD3aa5dGBxoFP8gWPJ4ShjBzH3nUq+9i3d/Mdes8YWoIIilbHDRNPUnykraaEHeVwXIIawHxDnU=) 2025-05-06 00:26:03.122830 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB9Agx3PAizFN7VnirkbMqSjda7BPpdlAYU/5AOCEDyXs1dXmxSKKVhRzC0eNse7+O/pnum+VLovD+ntuFyLj2k=) 2025-05-06 00:26:03.123779 | orchestrator | 2025-05-06 00:26:03.124388 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:03.124975 | orchestrator | Tuesday 06 May 2025 00:26:03 +0000 (0:00:01.020) 0:00:08.360 *********** 2025-05-06 00:26:04.120460 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMt6n0q/bzdHCxp9yHEFeJWa5LPMlmPZ9NsRw9IY7in8) 2025-05-06 00:26:04.121263 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2YVpwixXxJYT8AFwlQtPaKrONiLvDdK7kXbyGWgfOyX8FHRig8We5TrDNoTdUXS5EnUiX6t91znMZR244Hw/qH+tYBzXF1uc3PfMVBnZhtNVHqgHOC9wvdzmX3LaMdNhzaag0UGjlkd0+iYVX2X9QQa1UpGc7ASOMXocXhVxXqk8SZNRMn1EZPSoTDiq8i7WvNRHpfs/jRdlMwpmnUDivhr4IFYqlThoK/QG6JEWGQad+VZ4U/efw9WVSuYB8uRAV4oV/AOwiyVdrgvCZdbtd+g1Vylcj1Rv/4Rp5Mr0scJXIw9H5akw1fUGb1ddUZYW6ivX/jIyHyxxjPLyJZv1u03XMVyPxMA8es8pzuswHPUKTvVRjJS2xdrdbz4/XfskTQHGzxAk4xl7GAA7wIqcWB51DE+wDNz3dnnC7P3g+20QTJYRVr6pGaDyQl+SVObhZcX0dsXrYM/kU7epp+/vb7/FV6/hqJl5bhBtAlndUunxwFbivlHM0DQwuEInWcVM=) 2025-05-06 00:26:04.121364 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHUAn+6a8xAj6j0jUUYzB3M49hzftvYj0CgieU3c/ybDPxrCuawr+OHAOcyq1jWn0Jxdrff6zKCrwHNZBdc6P8U=) 2025-05-06 00:26:04.121942 | orchestrator | 2025-05-06 00:26:04.122591 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:04.123343 | orchestrator | Tuesday 06 May 2025 00:26:04 +0000 (0:00:00.996) 0:00:09.357 *********** 2025-05-06 00:26:05.182836 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEOO5oqMkI8263fkPmA73E4pOP9VuCwcHLj/8dbgLpEee/txY5BHBeezJCX7WIrWdJcS0nub49lAxClDG+DhuPI=) 2025-05-06 00:26:05.185251 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCg+F/9qMng61JKYO/M9b1dg7fmbcFWOqcz7PCKwaY+9mdoxqquSje4pEWRHwlLc5CDSmTLJtpQCBV7bWJ6FXsSRMbsqwq+yqZNbjWJXTbS5Gcq+sDKtrpLZA37CuDHjORC997REk/bwjVYOycfvp/iQ2nGvxvKaOGwngR01JDDMxnZhBNZdduc9jqXdpwfdvKJWq66jEHZ/g9EXbihl5+1Vd472q7UeKmkGSCTrIxdmn/7/pWVaWPjK7fMcqtMDHiGXvwMmSJYFwujd5jjcWq5EvWERN9ZUSAQekyysz9xDaHwz7tmqv+7H069Mjhj0mQTKJ4OY70bXW80kQmroiaRVlmhgYVD3Y52xxHMGOWqq2dNnEyk0eoIQTOeA/Ngpm7+o4mQSOy53NDgDM/f+D3lTz2Svb8BAXZ00QKlcdbewcCQR2Kr2pFznvvT0e++6c3IdVQHVDvAVdgrIJNW5TTyc6gIbOS2SW7I1uS30ozTbErnzoFmUj7LVLf70QdWM90=) 2025-05-06 00:26:05.185534 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPZ6Uri4dF1SkWaj7mu531P1rQ5ZGQMyjdRObiJpnUl5) 2025-05-06 00:26:05.185595 | orchestrator | 2025-05-06 00:26:05.186112 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:05.187311 | orchestrator | Tuesday 06 May 2025 00:26:05 +0000 (0:00:01.062) 0:00:10.420 *********** 2025-05-06 00:26:06.224075 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv33aJDgY/NwpJL1OvnGatJ+Kl7mVYn5OAxQCC+5D1HCQGrKA0Z/LFUYaK02ufKx1kHjaYlnlYumJ3qHXSeJoRCJ75U5I10boYD2ugMcbineO1bNDpXTYj8ye91lFzckOZdCqpKBS51dnqfRmFTV0qdSw6rhkmx8zEe+0GGJGl43E0AS4Xa49kUqHlOqzp/3ZD5KN0O0s29hnHH2nToxDNVqf/ufVnZDSkGa679bD3yeDEbjJ2HsWrB6TsIEIMJTjyIZud62kr/fEOCAEHR04EIZXy47DEqVEH/MLAxlRv5hYz2ShR7uCfMNOCanJ+CqbOLc734y6q1g4XaN6INZseCdALAAQPj4lPzkeqx1hvK0xmnlX5rsXz0oN0OFmywW0lR1iv2q4bgupik9sG7lqubExiOcaOVWb/lv7O9qKxbLkWb9RUHta1EV+3ZuBG2k+A3M8ogAfrBtv39LWT/rUcsgL3gr1Y/ElDFbG3oDP7Oel5lTqI3V7Y9Wno2KnmKIU=) 2025-05-06 00:26:06.226264 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJL6ZD20vA6thEFfXg69wDDatA0oJBzuUE9jfU6L+ZpdVFi/Njt918J07IkFkOnOqkz/0ByrQpF3xaG2D0UhghQ=) 2025-05-06 00:26:06.226524 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFIpWxgMg5A8xGMLhFJTYq3Ux9R/et5Q2E8yQLo/BAW5) 2025-05-06 00:26:06.226599 | orchestrator | 2025-05-06 00:26:06.226645 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:06.226971 | orchestrator | Tuesday 06 May 2025 00:26:06 +0000 (0:00:01.041) 0:00:11.461 *********** 2025-05-06 00:26:07.309336 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbcrgKHJ+XD8xTCMpktFKTJX4nQ11iO3o6+qsg+EufeRWZxUn5INzfcHe/0xi1Tb1eZnpGjqs3ITXhx7XpWSYrEq5fUQRuH47a2IP4v5xSlRGacaVHauxSii4l1rZ6+7E3yy5LzT+nEg1wUmz5HMip+f82Qe/MRYZ6ULIq3HHPwM/DR5Sfa6jKuXVuHpkSpYmyjBe4OlZf9OnuylcShauXn4mY0hXvOBUyXSv1WdlHQv1av+rELeuA8+i+Zzka7wnlRK1I932mQ+A7Y21ITYO3ffBWQJ8HU/loVC+85SS9DhoCjEBm3MGmoHdRpAQA1dpp2VyCm+nhfZq82+rpAkiqWrCcyOnuSEH7QUkd6qKopB10WzfhUwq5QVOyh6gyQDH4GocRr2/vZeuTOD6BLCY7vl1hgp3QnGkwori2rSpEogKVjG76K8GYL3AzFE78T6WzbDQDPcYMO8/s1RuQH8SE17EjEPS+GD0Xld+DXl9rt14GlJHMuX7QxFyg04/JJIM=) 2025-05-06 00:26:07.310170 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIbIqtPxqeU9cxDOg9lCCMs2VgZIu6t0uRs52ycICBqJ) 2025-05-06 00:26:07.310883 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC5YiVBPTy4bu0gGLFRGzogx3XOwseDmXVZVmba+rHsUsKy2/FTUwuxToKJeYDFeZMK/mp1V9uYjOoq8jIjK/9M=) 2025-05-06 00:26:07.311185 | orchestrator | 2025-05-06 00:26:07.311500 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:07.312041 | orchestrator | Tuesday 06 May 2025 00:26:07 +0000 (0:00:01.083) 0:00:12.545 *********** 
2025-05-06 00:26:08.369716 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNYxiwgU1TNQWJ7yik70L9u/5gcl6t9Zeq56T/hyftYxEgBw+z5BB+0W5vo5TudgPrJmOwbSPReAHMADdMzLYXA=) 2025-05-06 00:26:08.370252 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjP2BTQzBiJ7c5SwPznnAybktojYrbL/aI2NacDIZriwXSPdW22xIZdLZ4YhX+XarxXTejhmooe0MRR03TqnYLEOYuZwd1+nE7a1Whaw+X2g/Naq5qH83K0amPjUrGB/cU4fqpOi2efjIVKeqbqcqjL/BKw0YeTmNGqdx1QTmjm9WbN6hx1WuEu8JukvQ+rjBapIA5SYcbAEiJ9XTssbElc9Fq0MIn56Z06q4Ev1RZddKOmdPllBSNm9JUJukrD8IaLv2dE046xjubDNtYP7aS+Z+G41glEAWdAS8OVJ2VYPIXh8ZRr0SAk9VZunafeY6E6lhpzcjvGTROhnFsCg9vMY362ijcehVwLU3fB/fyBIg+qQe77RLi/kmXxjYARAENYGaaxXLedp8kb8/9b3AM2NHPv427EwsiKeJSTklvl7x1a/QJjtKJEeuq9Vly0/Cje/iuTbGCxAqE6uoMFPO298Gr8VPyfS90D8QYOPpdlpMmW0kXPjPTtFAv5D1mxj8=) 2025-05-06 00:26:08.370330 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOonGX/gNB1X1MQhbzWvNfUtUhMwxgm+hT2HXRK5zUk/) 2025-05-06 00:26:08.370843 | orchestrator | 2025-05-06 00:26:08.371905 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-06 00:26:13.626633 | orchestrator | Tuesday 06 May 2025 00:26:08 +0000 (0:00:01.059) 0:00:13.604 *********** 2025-05-06 00:26:13.626908 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-06 00:26:13.627403 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-06 00:26:13.627439 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-06 00:26:13.628115 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-06 00:26:13.628512 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-06 00:26:13.631898 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-06 00:26:13.632414 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-2) 2025-05-06 00:26:13.632610 | orchestrator | 2025-05-06 00:26:13.633059 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-06 00:26:13.633448 | orchestrator | Tuesday 06 May 2025 00:26:13 +0000 (0:00:05.258) 0:00:18.863 *********** 2025-05-06 00:26:13.797914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-06 00:26:13.798759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-06 00:26:13.799370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-06 00:26:13.800585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-06 00:26:13.801364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-06 00:26:13.801454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-06 00:26:13.802255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-06 00:26:13.803335 | orchestrator | 2025-05-06 00:26:13.804167 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:13.805101 | orchestrator | Tuesday 06 May 2025 00:26:13 +0000 (0:00:00.173) 0:00:19.036 *********** 2025-05-06 00:26:14.835857 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGhWx/jRNZRcWcLRw/a4j7sLctacXVBjPkJZsNeWFmRkZ2wUJ16QKbFoxzTwuRTMNUzv4oR19Ln3n7IG902KE/k=) 2025-05-06 00:26:14.837157 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC51TZ4BofYJnCOA1tvboA2mNzmYxL8o8Defg6rg2z3za2IuJLRtUoQlla5ZsGKat8yhqjmlrZRRdJXNr7ZZxvfQfwHuXKpRAzgGITeSqF/MUCSYkS1xubrU1FlS5zbeOkmwFOji/NyZKLiluMJuXlzElxeBWD3g0uCcLR2jfBpTR+CSunFNijTHzEFF2iMsMM2uPwl31RVM8WH2C4cldIn/XsFsnvcyC1Eu46y9pBmHXm9TvD0xicyYO2d5aQdmyP7+in52uEN079xhIgFmXzaqf8DC3pT5ScE8OLTHgPIqv67A3CIYrjBFMX2J4eXX8lIHsKmsioduaD1JvYy3S4BoViUDTZ+G2umgMKRhF4UXN8xvSsapPSWZpM5J4Cs3aAf1vwQ+aS0z4yG17QYlBcAnz5vWm+gPmWgx7Lys7zZ7k/R8JK7Ua51xdBttvU9o5o5eJ7YGhAIV/WdiVOCO7b+BUSe9MnOyli6ShBNr/DZr3vkouJKNHR6xO29xwpabDc=) 2025-05-06 00:26:14.838147 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMDWFEmpOxwQkVqQBTKHLNJBMgVUR1+YsysVT/qKTyY) 2025-05-06 00:26:14.838911 | orchestrator | 2025-05-06 00:26:14.839654 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:14.840285 | orchestrator | Tuesday 06 May 2025 00:26:14 +0000 (0:00:01.036) 0:00:20.073 *********** 2025-05-06 00:26:15.877021 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDDvnfbjjPfvdZxi3TTMugLt3NxrMM9V8SJX4r1jOkmvSKAR5iv0DmxkmgvsrgOeyGmengujUn1ip+D7XJ/RA/Zu4YUyBB/6pQNE1TBvwCEIw/mUB6O5ppiQWa6HjhOfZ31VgoKxw46Mb6bwleWn00c1T0rKFG8w9+JamEMhhDQ6aDzg6vIJlcqC1I2VVqwX//bA2nM4vjRvCo0NFLh5bMEVC8GYS/Zmij/5oI9iVhhvUkY1TCJcpLBaA6Hu8C01zF1kpjJPIC7Gk8PLKEEQUKIqDs71zTRSFV8OKWVFHGthZfpyLw4EpUodH1bfLerCaWxsljkAFRoU28ciHMCWIGh2yDuyf0z6EJIS/8XAaKzfbm7QZOo5a9L7Oxy5AM0qK8w3AIRAaglQtQZbae4JieuknQ39sUu6VMsxFNCUJ2pDVspUieKjoLYoD3aa5dGBxoFP8gWPJ4ShjBzH3nUq+9i3d/Mdes8YWoIIilbHDRNPUnykraaEHeVwXIIawHxDnU=) 2025-05-06 00:26:15.877676 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB9Agx3PAizFN7VnirkbMqSjda7BPpdlAYU/5AOCEDyXs1dXmxSKKVhRzC0eNse7+O/pnum+VLovD+ntuFyLj2k=) 2025-05-06 00:26:15.878246 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEQefKsrm3Vfi+PmK99ZGG8Kt5lzB96VHJqOiBuHaKaU) 2025-05-06 00:26:15.879251 | orchestrator | 2025-05-06 00:26:15.880742 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:15.880992 | orchestrator | Tuesday 06 May 2025 00:26:15 +0000 (0:00:01.041) 0:00:21.114 *********** 2025-05-06 00:26:16.918342 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMt6n0q/bzdHCxp9yHEFeJWa5LPMlmPZ9NsRw9IY7in8) 2025-05-06 00:26:16.918712 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC2YVpwixXxJYT8AFwlQtPaKrONiLvDdK7kXbyGWgfOyX8FHRig8We5TrDNoTdUXS5EnUiX6t91znMZR244Hw/qH+tYBzXF1uc3PfMVBnZhtNVHqgHOC9wvdzmX3LaMdNhzaag0UGjlkd0+iYVX2X9QQa1UpGc7ASOMXocXhVxXqk8SZNRMn1EZPSoTDiq8i7WvNRHpfs/jRdlMwpmnUDivhr4IFYqlThoK/QG6JEWGQad+VZ4U/efw9WVSuYB8uRAV4oV/AOwiyVdrgvCZdbtd+g1Vylcj1Rv/4Rp5Mr0scJXIw9H5akw1fUGb1ddUZYW6ivX/jIyHyxxjPLyJZv1u03XMVyPxMA8es8pzuswHPUKTvVRjJS2xdrdbz4/XfskTQHGzxAk4xl7GAA7wIqcWB51DE+wDNz3dnnC7P3g+20QTJYRVr6pGaDyQl+SVObhZcX0dsXrYM/kU7epp+/vb7/FV6/hqJl5bhBtAlndUunxwFbivlHM0DQwuEInWcVM=) 2025-05-06 00:26:16.919674 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHUAn+6a8xAj6j0jUUYzB3M49hzftvYj0CgieU3c/ybDPxrCuawr+OHAOcyq1jWn0Jxdrff6zKCrwHNZBdc6P8U=) 2025-05-06 00:26:16.920531 | orchestrator | 2025-05-06 00:26:16.921447 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:16.922136 | orchestrator | Tuesday 06 May 2025 00:26:16 +0000 (0:00:01.040) 0:00:22.155 *********** 2025-05-06 00:26:18.000659 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEOO5oqMkI8263fkPmA73E4pOP9VuCwcHLj/8dbgLpEee/txY5BHBeezJCX7WIrWdJcS0nub49lAxClDG+DhuPI=) 2025-05-06 00:26:18.000886 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPZ6Uri4dF1SkWaj7mu531P1rQ5ZGQMyjdRObiJpnUl5) 2025-05-06 00:26:18.002194 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCg+F/9qMng61JKYO/M9b1dg7fmbcFWOqcz7PCKwaY+9mdoxqquSje4pEWRHwlLc5CDSmTLJtpQCBV7bWJ6FXsSRMbsqwq+yqZNbjWJXTbS5Gcq+sDKtrpLZA37CuDHjORC997REk/bwjVYOycfvp/iQ2nGvxvKaOGwngR01JDDMxnZhBNZdduc9jqXdpwfdvKJWq66jEHZ/g9EXbihl5+1Vd472q7UeKmkGSCTrIxdmn/7/pWVaWPjK7fMcqtMDHiGXvwMmSJYFwujd5jjcWq5EvWERN9ZUSAQekyysz9xDaHwz7tmqv+7H069Mjhj0mQTKJ4OY70bXW80kQmroiaRVlmhgYVD3Y52xxHMGOWqq2dNnEyk0eoIQTOeA/Ngpm7+o4mQSOy53NDgDM/f+D3lTz2Svb8BAXZ00QKlcdbewcCQR2Kr2pFznvvT0e++6c3IdVQHVDvAVdgrIJNW5TTyc6gIbOS2SW7I1uS30ozTbErnzoFmUj7LVLf70QdWM90=) 2025-05-06 00:26:18.003424 | orchestrator | 2025-05-06 00:26:18.004336 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:18.004814 | orchestrator | Tuesday 06 May 2025 00:26:17 +0000 (0:00:01.082) 0:00:23.238 *********** 2025-05-06 00:26:19.036794 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFIpWxgMg5A8xGMLhFJTYq3Ux9R/et5Q2E8yQLo/BAW5) 2025-05-06 00:26:19.037033 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv33aJDgY/NwpJL1OvnGatJ+Kl7mVYn5OAxQCC+5D1HCQGrKA0Z/LFUYaK02ufKx1kHjaYlnlYumJ3qHXSeJoRCJ75U5I10boYD2ugMcbineO1bNDpXTYj8ye91lFzckOZdCqpKBS51dnqfRmFTV0qdSw6rhkmx8zEe+0GGJGl43E0AS4Xa49kUqHlOqzp/3ZD5KN0O0s29hnHH2nToxDNVqf/ufVnZDSkGa679bD3yeDEbjJ2HsWrB6TsIEIMJTjyIZud62kr/fEOCAEHR04EIZXy47DEqVEH/MLAxlRv5hYz2ShR7uCfMNOCanJ+CqbOLc734y6q1g4XaN6INZseCdALAAQPj4lPzkeqx1hvK0xmnlX5rsXz0oN0OFmywW0lR1iv2q4bgupik9sG7lqubExiOcaOVWb/lv7O9qKxbLkWb9RUHta1EV+3ZuBG2k+A3M8ogAfrBtv39LWT/rUcsgL3gr1Y/ElDFbG3oDP7Oel5lTqI3V7Y9Wno2KnmKIU=) 2025-05-06 00:26:19.037771 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJL6ZD20vA6thEFfXg69wDDatA0oJBzuUE9jfU6L+ZpdVFi/Njt918J07IkFkOnOqkz/0ByrQpF3xaG2D0UhghQ=) 2025-05-06 00:26:19.039022 | orchestrator | 2025-05-06 00:26:19.040070 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:19.040307 | orchestrator | Tuesday 06 May 2025 00:26:19 +0000 (0:00:01.035) 0:00:24.274 *********** 2025-05-06 00:26:20.075961 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC5YiVBPTy4bu0gGLFRGzogx3XOwseDmXVZVmba+rHsUsKy2/FTUwuxToKJeYDFeZMK/mp1V9uYjOoq8jIjK/9M=) 2025-05-06 00:26:21.107425 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbcrgKHJ+XD8xTCMpktFKTJX4nQ11iO3o6+qsg+EufeRWZxUn5INzfcHe/0xi1Tb1eZnpGjqs3ITXhx7XpWSYrEq5fUQRuH47a2IP4v5xSlRGacaVHauxSii4l1rZ6+7E3yy5LzT+nEg1wUmz5HMip+f82Qe/MRYZ6ULIq3HHPwM/DR5Sfa6jKuXVuHpkSpYmyjBe4OlZf9OnuylcShauXn4mY0hXvOBUyXSv1WdlHQv1av+rELeuA8+i+Zzka7wnlRK1I932mQ+A7Y21ITYO3ffBWQJ8HU/loVC+85SS9DhoCjEBm3MGmoHdRpAQA1dpp2VyCm+nhfZq82+rpAkiqWrCcyOnuSEH7QUkd6qKopB10WzfhUwq5QVOyh6gyQDH4GocRr2/vZeuTOD6BLCY7vl1hgp3QnGkwori2rSpEogKVjG76K8GYL3AzFE78T6WzbDQDPcYMO8/s1RuQH8SE17EjEPS+GD0Xld+DXl9rt14GlJHMuX7QxFyg04/JJIM=) 2025-05-06 00:26:21.107653 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIbIqtPxqeU9cxDOg9lCCMs2VgZIu6t0uRs52ycICBqJ) 2025-05-06 00:26:21.107681 | orchestrator | 2025-05-06 00:26:21.107697 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-06 00:26:21.107713 | orchestrator | Tuesday 06 May 2025 00:26:20 +0000 (0:00:01.035) 0:00:25.310 *********** 2025-05-06 00:26:21.107748 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDjP2BTQzBiJ7c5SwPznnAybktojYrbL/aI2NacDIZriwXSPdW22xIZdLZ4YhX+XarxXTejhmooe0MRR03TqnYLEOYuZwd1+nE7a1Whaw+X2g/Naq5qH83K0amPjUrGB/cU4fqpOi2efjIVKeqbqcqjL/BKw0YeTmNGqdx1QTmjm9WbN6hx1WuEu8JukvQ+rjBapIA5SYcbAEiJ9XTssbElc9Fq0MIn56Z06q4Ev1RZddKOmdPllBSNm9JUJukrD8IaLv2dE046xjubDNtYP7aS+Z+G41glEAWdAS8OVJ2VYPIXh8ZRr0SAk9VZunafeY6E6lhpzcjvGTROhnFsCg9vMY362ijcehVwLU3fB/fyBIg+qQe77RLi/kmXxjYARAENYGaaxXLedp8kb8/9b3AM2NHPv427EwsiKeJSTklvl7x1a/QJjtKJEeuq9Vly0/Cje/iuTbGCxAqE6uoMFPO298Gr8VPyfS90D8QYOPpdlpMmW0kXPjPTtFAv5D1mxj8=) 2025-05-06 00:26:21.108122 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNYxiwgU1TNQWJ7yik70L9u/5gcl6t9Zeq56T/hyftYxEgBw+z5BB+0W5vo5TudgPrJmOwbSPReAHMADdMzLYXA=) 2025-05-06 00:26:21.108645 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOonGX/gNB1X1MQhbzWvNfUtUhMwxgm+hT2HXRK5zUk/) 2025-05-06 00:26:21.108695 | orchestrator | 2025-05-06 00:26:21.109136 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-06 00:26:21.109671 | orchestrator | Tuesday 06 May 2025 00:26:21 +0000 (0:00:01.035) 0:00:26.346 *********** 2025-05-06 00:26:21.263293 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-06 00:26:21.264278 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-06 00:26:21.264428 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-06 00:26:21.265607 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-06 00:26:21.266104 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-06 00:26:21.266610 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-06 00:26:21.267024 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-06 00:26:21.267581 | orchestrator | 
skipping: [testbed-manager] 2025-05-06 00:26:21.268116 | orchestrator | 2025-05-06 00:26:21.268522 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-06 00:26:21.268983 | orchestrator | Tuesday 06 May 2025 00:26:21 +0000 (0:00:00.155) 0:00:26.501 *********** 2025-05-06 00:26:21.338629 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:26:21.339574 | orchestrator | 2025-05-06 00:26:21.339624 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-06 00:26:21.340510 | orchestrator | Tuesday 06 May 2025 00:26:21 +0000 (0:00:00.075) 0:00:26.576 *********** 2025-05-06 00:26:21.393629 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:26:21.394361 | orchestrator | 2025-05-06 00:26:21.394595 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-06 00:26:21.395074 | orchestrator | Tuesday 06 May 2025 00:26:21 +0000 (0:00:00.056) 0:00:26.632 *********** 2025-05-06 00:26:22.112447 | orchestrator | changed: [testbed-manager] 2025-05-06 00:26:22.114165 | orchestrator | 2025-05-06 00:26:22.114716 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:26:22.116155 | orchestrator | 2025-05-06 00:26:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:26:22.117468 | orchestrator | 2025-05-06 00:26:22 | INFO  | Please wait and do not abort execution. 
2025-05-06 00:26:22.117499 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-06 00:26:22.118331 | orchestrator | 2025-05-06 00:26:22.119521 | orchestrator | Tuesday 06 May 2025 00:26:22 +0000 (0:00:00.716) 0:00:27.349 *********** 2025-05-06 00:26:22.120839 | orchestrator | =============================================================================== 2025-05-06 00:26:22.120868 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.96s 2025-05-06 00:26:22.121083 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.26s 2025-05-06 00:26:22.121941 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-06 00:26:22.122852 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-06 00:26:22.123837 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-06 00:26:22.124517 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-06 00:26:22.125359 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-06 00:26:22.126103 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-06 00:26:22.126917 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-06 00:26:22.127677 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-06 00:26:22.128008 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-06 00:26:22.129022 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-06 00:26:22.129777 | orchestrator | osism.commons.known_hosts : Write 
scanned known_hosts entries ----------- 1.04s 2025-05-06 00:26:22.130614 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-06 00:26:22.130961 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-06 00:26:22.131684 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-05-06 00:26:22.132471 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.72s 2025-05-06 00:26:22.133491 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-05-06 00:26:22.134202 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-05-06 00:26:22.134235 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-05-06 00:26:22.452678 | orchestrator | + osism apply squid 2025-05-06 00:26:23.842153 | orchestrator | 2025-05-06 00:26:23 | INFO  | Task b30789dd-dbe0-4574-97fe-7711c97c3891 (squid) was prepared for execution. 2025-05-06 00:26:26.763222 | orchestrator | 2025-05-06 00:26:23 | INFO  | It takes a moment until task b30789dd-dbe0-4574-97fe-7711c97c3891 (squid) has been started and output is visible here. 
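The timing recap above (one `Run ssh-keyscan` pass, one `Write scanned known_hosts entries` step per node, then `Set file permissions`) follows a pattern that can be sketched as a plain shell loop. This is a sketch only: the host list, timeout, and output path are illustrative stand-ins, not taken from the `osism.commons.known_hosts` role.

```shell
#!/bin/sh
# Sketch of the known_hosts flow seen in the timings above: scan each host's
# SSH keys, collect the entries, then fix the file permissions.
# Host names, timeout, and output path are illustrative assumptions.
set -eu
known_hosts=$(mktemp)
for host in testbed-manager testbed-node-0 testbed-node-1; do
    # ssh-keyscan prints one line per offered key type (ssh-rsa,
    # ecdsa-sha2-nistp256, ssh-ed25519), matching the entry formats in the log.
    ssh-keyscan -T 5 "$host" >> "$known_hosts" 2>/dev/null || true
done
chmod 0644 "$known_hosts"   # the "Set file permissions" task at the end of the play
```

In the real role these are Ansible tasks run once per inventory host, which is why `Write scanned known_hosts entries` appears repeatedly in the recap with roughly one second each.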
2025-05-06 00:26:26.763395 | orchestrator | 2025-05-06 00:26:26.764605 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-06 00:26:26.765775 | orchestrator | 2025-05-06 00:26:26.765825 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-06 00:26:26.766605 | orchestrator | Tuesday 06 May 2025 00:26:26 +0000 (0:00:00.103) 0:00:00.103 *********** 2025-05-06 00:26:26.855729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-06 00:26:26.856028 | orchestrator | 2025-05-06 00:26:26.857008 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-06 00:26:26.858010 | orchestrator | Tuesday 06 May 2025 00:26:26 +0000 (0:00:00.095) 0:00:00.198 *********** 2025-05-06 00:26:28.188052 | orchestrator | ok: [testbed-manager] 2025-05-06 00:26:28.188290 | orchestrator | 2025-05-06 00:26:28.189268 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-06 00:26:28.190014 | orchestrator | Tuesday 06 May 2025 00:26:28 +0000 (0:00:01.330) 0:00:01.529 *********** 2025-05-06 00:26:29.287320 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-06 00:26:29.287705 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-06 00:26:29.288335 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-06 00:26:29.289222 | orchestrator | 2025-05-06 00:26:29.289454 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-06 00:26:29.290192 | orchestrator | Tuesday 06 May 2025 00:26:29 +0000 (0:00:01.099) 0:00:02.629 *********** 2025-05-06 00:26:30.348658 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-06 00:26:30.349047 | 
orchestrator | 2025-05-06 00:26:30.349115 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-06 00:26:30.349703 | orchestrator | Tuesday 06 May 2025 00:26:30 +0000 (0:00:01.057) 0:00:03.686 *********** 2025-05-06 00:26:30.678631 | orchestrator | ok: [testbed-manager] 2025-05-06 00:26:30.679022 | orchestrator | 2025-05-06 00:26:30.679715 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-06 00:26:30.680494 | orchestrator | Tuesday 06 May 2025 00:26:30 +0000 (0:00:00.335) 0:00:04.021 *********** 2025-05-06 00:26:31.626585 | orchestrator | changed: [testbed-manager] 2025-05-06 00:26:31.627005 | orchestrator | 2025-05-06 00:26:31.627755 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-06 00:26:31.628747 | orchestrator | Tuesday 06 May 2025 00:26:31 +0000 (0:00:00.947) 0:00:04.968 *********** 2025-05-06 00:27:03.307246 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-05-06 00:27:15.587670 | orchestrator | ok: [testbed-manager] 2025-05-06 00:27:15.587829 | orchestrator | 2025-05-06 00:27:15.587853 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-06 00:27:15.587870 | orchestrator | Tuesday 06 May 2025 00:27:03 +0000 (0:00:31.674) 0:00:36.643 *********** 2025-05-06 00:27:15.587904 | orchestrator | changed: [testbed-manager] 2025-05-06 00:28:15.663478 | orchestrator | 2025-05-06 00:28:15.663676 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-06 00:28:15.663698 | orchestrator | Tuesday 06 May 2025 00:27:15 +0000 (0:00:12.277) 0:00:48.921 *********** 2025-05-06 00:28:15.663729 | orchestrator | Pausing for 60 seconds 2025-05-06 00:28:15.664610 | orchestrator | changed: [testbed-manager] 2025-05-06 00:28:15.664643 | orchestrator | 2025-05-06 00:28:15.664667 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-06 00:28:15.666252 | orchestrator | Tuesday 06 May 2025 00:28:15 +0000 (0:01:00.080) 0:01:49.002 *********** 2025-05-06 00:28:15.723386 | orchestrator | ok: [testbed-manager] 2025-05-06 00:28:15.724544 | orchestrator | 2025-05-06 00:28:15.725551 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-06 00:28:15.725940 | orchestrator | Tuesday 06 May 2025 00:28:15 +0000 (0:00:00.064) 0:01:49.066 *********** 2025-05-06 00:28:16.338717 | orchestrator | changed: [testbed-manager] 2025-05-06 00:28:16.338903 | orchestrator | 2025-05-06 00:28:16.338935 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:28:16.339604 | orchestrator | 2025-05-06 00:28:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-06 00:28:16.340350 | orchestrator | 2025-05-06 00:28:16 | INFO  | Please wait and do not abort execution. 2025-05-06 00:28:16.340395 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:28:16.341651 | orchestrator | 2025-05-06 00:28:16.342142 | orchestrator | Tuesday 06 May 2025 00:28:16 +0000 (0:00:00.613) 0:01:49.679 *********** 2025-05-06 00:28:16.343194 | orchestrator | =============================================================================== 2025-05-06 00:28:16.343785 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-05-06 00:28:16.344603 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.67s 2025-05-06 00:28:16.345182 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.28s 2025-05-06 00:28:16.345688 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.33s 2025-05-06 00:28:16.346185 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.10s 2025-05-06 00:28:16.346635 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s 2025-05-06 00:28:16.347149 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2025-05-06 00:28:16.347871 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2025-05-06 00:28:16.348282 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2025-05-06 00:28:16.348507 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-05-06 00:28:16.349113 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-05-06 00:28:16.762839 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-06 00:28:16.815450 | 
orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-05-06 00:28:16.815592 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-06 00:28:16.816292 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-06 00:28:16.820068 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-06 00:28:16.820106 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-05-06 00:28:16.820130 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-06 00:28:16.825638 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-06 00:28:16.831877 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-06 00:28:18.224847 | orchestrator | 2025-05-06 00:28:18 | INFO  | Task 2b4e3cde-dfc4-404e-8ccd-9c666303588a (operator) was prepared for execution. 2025-05-06 00:28:21.133640 | orchestrator | 2025-05-06 00:28:18 | INFO  | It takes a moment until task 2b4e3cde-dfc4-404e-8ccd-9c666303588a (operator) has been started and output is visible here. 
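The three `sed -i` calls in the trace above activate commented-out YAML by capturing everything after the leading `# ` and substituting the capture (`\1`) back, stripping only the comment prefix. A self-contained demo of that pattern on a throwaway file (the job itself edits files under `/opt/configuration/inventory/group_vars/`):

```shell
#!/bin/sh
# Demo of the uncomment-by-backreference sed pattern from the trace above,
# applied to a temp file instead of the real group_vars files.
set -eu
f=$(mktemp)
cat > "$f" <<'EOF'
# network_dispatcher_scripts:
#  - src: /opt/configuration/network/vxlan.sh
#  dest: routable.d/vxlan.sh
EOF
sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' "$f"
sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' "$f"
sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' "$f"
# All three lines are now active YAML; no "# " prefixes remain.
```

Anchoring with `^` and `$` keeps the substitution from touching comments that merely contain the same text somewhere in the line.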
2025-05-06 00:28:21.133795 | orchestrator | 2025-05-06 00:28:21.133871 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-06 00:28:21.133894 | orchestrator | 2025-05-06 00:28:21.137178 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-06 00:28:24.509824 | orchestrator | Tuesday 06 May 2025 00:28:21 +0000 (0:00:00.083) 0:00:00.083 *********** 2025-05-06 00:28:24.509950 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:28:24.513219 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:28:24.513249 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:28:24.513264 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:28:24.513285 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:28:25.281867 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:28:25.281969 | orchestrator | 2025-05-06 00:28:25.281989 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-06 00:28:25.282008 | orchestrator | Tuesday 06 May 2025 00:28:24 +0000 (0:00:03.377) 0:00:03.461 *********** 2025-05-06 00:28:25.282125 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:28:25.282806 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:28:25.283422 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:28:25.284489 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:28:25.285701 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:28:25.286338 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:28:25.289644 | orchestrator | 2025-05-06 00:28:25.290091 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-06 00:28:25.290123 | orchestrator | 2025-05-06 00:28:25.290145 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-06 00:28:25.293123 | orchestrator | Tuesday 06 May 2025 00:28:25 +0000 (0:00:00.772) 0:00:04.233 *********** 2025-05-06 
00:28:25.342008 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:28:25.363013 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:28:25.379285 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:28:25.424594 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:28:25.425165 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:28:25.426083 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:28:25.427198 | orchestrator | 2025-05-06 00:28:25.427229 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-06 00:28:25.427538 | orchestrator | Tuesday 06 May 2025 00:28:25 +0000 (0:00:00.142) 0:00:04.376 *********** 2025-05-06 00:28:25.482590 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:28:25.506180 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:28:25.528787 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:28:25.567821 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:28:25.568310 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:28:25.569137 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:28:25.569722 | orchestrator | 2025-05-06 00:28:25.570894 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-06 00:28:25.571842 | orchestrator | Tuesday 06 May 2025 00:28:25 +0000 (0:00:00.144) 0:00:04.520 *********** 2025-05-06 00:28:26.197853 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:28:26.198082 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:28:26.199862 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:28:26.201140 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:28:26.202543 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:28:26.204111 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:28:26.204927 | orchestrator | 2025-05-06 00:28:26.206271 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-06 00:28:26.206689 | orchestrator | Tuesday 06 May 2025 
00:28:26 +0000 (0:00:00.627) 0:00:05.147 *********** 2025-05-06 00:28:27.086716 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:28:27.087593 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:28:27.087639 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:28:27.088432 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:28:27.090486 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:28:27.090877 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:28:27.091332 | orchestrator | 2025-05-06 00:28:27.092007 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-06 00:28:27.092607 | orchestrator | Tuesday 06 May 2025 00:28:27 +0000 (0:00:00.888) 0:00:06.036 *********** 2025-05-06 00:28:28.238180 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-06 00:28:28.238740 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-06 00:28:28.240593 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-06 00:28:28.241609 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-06 00:28:28.242500 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-06 00:28:28.244057 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-06 00:28:28.245260 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-06 00:28:28.246204 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-06 00:28:28.246547 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-06 00:28:28.248070 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-06 00:28:28.248648 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-06 00:28:28.252563 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-06 00:28:28.253310 | orchestrator | 2025-05-06 00:28:28.254099 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-06 00:28:28.254468 | orchestrator | Tuesday 06 
May 2025 00:28:28 +0000 (0:00:01.150) 0:00:07.186 *********** 2025-05-06 00:28:29.436055 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:28:29.436942 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:28:29.436988 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:28:29.440020 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:28:29.444800 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:28:29.446221 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:28:29.446854 | orchestrator | 2025-05-06 00:28:29.447584 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-06 00:28:29.448274 | orchestrator | Tuesday 06 May 2025 00:28:29 +0000 (0:00:01.194) 0:00:08.381 *********** 2025-05-06 00:28:30.535727 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-06 00:28:30.537209 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-06 00:28:30.612948 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-06 00:28:30.613075 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-06 00:28:30.613218 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-06 00:28:30.613249 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-06 00:28:30.613276 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-06 00:28:30.614002 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-06 00:28:30.614248 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-06 00:28:30.614875 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-06 00:28:30.615605 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-06 00:28:30.616143 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-06 00:28:30.616980 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-06 00:28:30.617665 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-06 00:28:30.618337 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-06 00:28:30.619013 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-06 00:28:30.619495 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-06 00:28:30.620627 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-06 00:28:30.620814 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-06 00:28:30.621389 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-06 00:28:30.621872 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-06 00:28:30.622282 | 
orchestrator | 2025-05-06 00:28:30.622651 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-06 00:28:30.623094 | orchestrator | Tuesday 06 May 2025 00:28:30 +0000 (0:00:01.182) 0:00:09.564 *********** 2025-05-06 00:28:31.198433 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:28:31.199285 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:28:31.201236 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:28:31.202762 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:28:31.203703 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:28:31.205354 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:28:31.206831 | orchestrator | 2025-05-06 00:28:31.207709 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-06 00:28:31.208771 | orchestrator | Tuesday 06 May 2025 00:28:31 +0000 (0:00:00.584) 0:00:10.148 *********** 2025-05-06 00:28:31.302009 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:28:31.325448 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:28:31.380849 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:28:31.381049 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:28:31.382208 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:28:31.383961 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:28:31.385039 | orchestrator | 2025-05-06 00:28:31.385759 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-06 00:28:31.386904 | orchestrator | Tuesday 06 May 2025 00:28:31 +0000 (0:00:00.183) 0:00:10.332 *********** 2025-05-06 00:28:32.133132 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-06 00:28:32.136259 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-06 00:28:32.136354 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:28:32.136744 | orchestrator | changed: [testbed-node-2] 2025-05-06 
00:28:32.136780 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-06 00:28:32.136803 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:28:32.137443 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-06 00:28:32.138180 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:28:32.138777 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-06 00:28:32.139125 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:28:32.139758 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-06 00:28:32.140252 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:28:32.140946 | orchestrator | 2025-05-06 00:28:32.141394 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-06 00:28:32.141710 | orchestrator | Tuesday 06 May 2025 00:28:32 +0000 (0:00:00.750) 0:00:11.083 *********** 2025-05-06 00:28:32.195987 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:28:32.223214 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:28:32.243947 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:28:32.270896 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:28:32.274369 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:28:32.276415 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:28:32.276460 | orchestrator | 2025-05-06 00:28:32.279768 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-06 00:28:32.280599 | orchestrator | Tuesday 06 May 2025 00:28:32 +0000 (0:00:00.140) 0:00:11.223 *********** 2025-05-06 00:28:32.312892 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:28:32.331896 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:28:32.381961 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:28:32.416250 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:28:32.416496 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:28:32.416655 | 
orchestrator | skipping: [testbed-node-5] 2025-05-06 00:28:32.417425 | orchestrator | 2025-05-06 00:28:32.421774 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-06 00:28:32.422839 | orchestrator | Tuesday 06 May 2025 00:28:32 +0000 (0:00:00.145) 0:00:11.369 *********** 2025-05-06 00:28:32.492641 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:28:32.527379 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:28:32.546431 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:28:32.579705 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:28:32.583467 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:28:32.585342 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:28:32.587575 | orchestrator | 2025-05-06 00:28:32.588261 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-06 00:28:32.588770 | orchestrator | Tuesday 06 May 2025 00:28:32 +0000 (0:00:00.163) 0:00:11.532 *********** 2025-05-06 00:28:33.216971 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:28:33.217776 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:28:33.218360 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:28:33.219136 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:28:33.220605 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:28:33.220932 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:28:33.222187 | orchestrator | 2025-05-06 00:28:33.222676 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-06 00:28:33.223322 | orchestrator | Tuesday 06 May 2025 00:28:33 +0000 (0:00:00.636) 0:00:12.169 *********** 2025-05-06 00:28:33.299991 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:28:33.321744 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:28:33.417775 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:28:33.418358 | 
orchestrator | skipping: [testbed-node-3] 2025-05-06 00:28:33.419569 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:28:33.421002 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:28:33.421657 | orchestrator | 2025-05-06 00:28:33.423220 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:28:33.429044 | orchestrator | 2025-05-06 00:28:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:28:33.429389 | orchestrator | 2025-05-06 00:28:33 | INFO  | Please wait and do not abort execution. 2025-05-06 00:28:33.430478 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-06 00:28:33.431335 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-06 00:28:33.432382 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-06 00:28:33.433204 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-06 00:28:33.433996 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-06 00:28:33.434529 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-06 00:28:33.435399 | orchestrator | 2025-05-06 00:28:33.435985 | orchestrator | Tuesday 06 May 2025 00:28:33 +0000 (0:00:00.200) 0:00:12.370 *********** 2025-05-06 00:28:33.437172 | orchestrator | =============================================================================== 2025-05-06 00:28:33.437723 | orchestrator | Gathering Facts --------------------------------------------------------- 3.38s 2025-05-06 00:28:33.438532 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.19s 2025-05-06 00:28:33.439352 | orchestrator | 
osism.commons.operator : Set language variables in .bashrc configuration file --- 1.18s
2025-05-06 00:28:33.439565 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2025-05-06 00:28:33.439928 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.89s
2025-05-06 00:28:33.440976 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s
2025-05-06 00:28:33.441351 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s
2025-05-06 00:28:33.442320 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2025-05-06 00:28:33.446141 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2025-05-06 00:28:33.446175 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s
2025-05-06 00:28:33.446197 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2025-05-06 00:28:34.044405 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s
2025-05-06 00:28:34.044609 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-05-06 00:28:34.044632 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2025-05-06 00:28:34.044647 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s
2025-05-06 00:28:34.044661 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s
2025-05-06 00:28:34.044676 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-05-06 00:28:34.044706 | orchestrator | + osism apply --environment custom facts
2025-05-06 00:28:35.417864 | orchestrator | 2025-05-06 00:28:35 | INFO  | Trying to run play facts in environment custom
2025-05-06 00:28:35.463633 | orchestrator | 2025-05-06 00:28:35 | INFO  | Task 16be2f2c-8956-435b-bd8a-42f354d55e25 (facts) was prepared for execution.
2025-05-06 00:28:38.474324 | orchestrator | 2025-05-06 00:28:35 | INFO  | It takes a moment until task 16be2f2c-8956-435b-bd8a-42f354d55e25 (facts) has been started and output is visible here.
2025-05-06 00:28:38.474561 | orchestrator |
2025-05-06 00:28:38.474967 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-06 00:28:38.476289 | orchestrator |
2025-05-06 00:28:38.476712 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-06 00:28:38.477448 | orchestrator | Tuesday 06 May 2025 00:28:38 +0000 (0:00:00.078) 0:00:00.078 ***********
2025-05-06 00:28:39.690169 | orchestrator | ok: [testbed-manager]
2025-05-06 00:28:40.793693 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:28:40.796302 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:28:40.796765 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:28:40.796796 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:28:40.796817 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:28:40.799139 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:28:40.802180 | orchestrator |
2025-05-06 00:28:40.802214 | orchestrator | TASK [Copy fact file] **********************************************************
2025-05-06 00:28:41.910119 | orchestrator | Tuesday 06 May 2025 00:28:40 +0000 (0:00:02.316) 0:00:02.395 ***********
2025-05-06 00:28:41.910260 | orchestrator | ok: [testbed-manager]
2025-05-06 00:28:42.778969 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:28:42.779404 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:28:42.779454 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:28:42.780174 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:28:42.781687 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:28:42.782717 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:28:42.784496 | orchestrator |
2025-05-06 00:28:42.785311 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-05-06 00:28:42.786595 | orchestrator |
2025-05-06 00:28:42.787454 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-06 00:28:42.846091 | orchestrator | Tuesday 06 May 2025 00:28:42 +0000 (0:00:01.987) 0:00:04.383 ***********
2025-05-06 00:28:42.846208 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:28:42.912602 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:28:42.913665 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:28:42.914218 | orchestrator |
2025-05-06 00:28:42.914632 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-06 00:28:42.915040 | orchestrator | Tuesday 06 May 2025 00:28:42 +0000 (0:00:00.134) 0:00:04.518 ***********
2025-05-06 00:28:43.035715 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:28:43.035856 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:28:43.035874 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:28:43.036048 | orchestrator |
2025-05-06 00:28:43.038997 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-06 00:28:43.164346 | orchestrator | Tuesday 06 May 2025 00:28:43 +0000 (0:00:00.124) 0:00:04.642 ***********
2025-05-06 00:28:43.164479 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:28:43.165037 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:28:43.167091 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:28:43.167190 | orchestrator |
2025-05-06 00:28:43.167212 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-06 00:28:43.167234 | orchestrator | Tuesday 06 May 2025 00:28:43 +0000 (0:00:00.125) 0:00:04.768 ***********
2025-05-06 00:28:43.311796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:28:43.311990 | orchestrator |
2025-05-06 00:28:43.312329 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-06 00:28:43.312957 | orchestrator | Tuesday 06 May 2025 00:28:43 +0000 (0:00:00.150) 0:00:04.918 ***********
2025-05-06 00:28:43.767598 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:28:43.770814 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:28:43.770932 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:28:43.771003 | orchestrator |
2025-05-06 00:28:43.771026 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-06 00:28:43.771205 | orchestrator | Tuesday 06 May 2025 00:28:43 +0000 (0:00:00.453) 0:00:05.371 ***********
2025-05-06 00:28:43.856111 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:28:43.856269 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:28:43.856294 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:28:43.856401 | orchestrator |
2025-05-06 00:28:43.856733 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-06 00:28:43.860553 | orchestrator | Tuesday 06 May 2025 00:28:43 +0000 (0:00:00.091) 0:00:05.463 ***********
2025-05-06 00:28:44.879664 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:28:44.880553 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:28:44.884153 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:28:45.305054 | orchestrator |
2025-05-06 00:28:45.305173 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-06 00:28:45.305193 | orchestrator | Tuesday 06 May 2025 00:28:44 +0000 (0:00:01.022) 0:00:06.486 ***********
2025-05-06 00:28:45.305222 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:28:45.305791 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:28:45.306625 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:28:45.307989 | orchestrator |
2025-05-06 00:28:45.309304 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-06 00:28:45.310422 | orchestrator | Tuesday 06 May 2025 00:28:45 +0000 (0:00:00.423) 0:00:06.909 ***********
2025-05-06 00:28:46.257628 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:28:46.257818 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:28:46.258182 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:28:46.259912 | orchestrator |
2025-05-06 00:28:46.260254 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-06 00:28:46.260630 | orchestrator | Tuesday 06 May 2025 00:28:46 +0000 (0:00:00.949) 0:00:07.859 ***********
2025-05-06 00:28:59.074675 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:28:59.144836 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:28:59.144964 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:28:59.144986 | orchestrator |
2025-05-06 00:28:59.145003 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-05-06 00:28:59.145041 | orchestrator | Tuesday 06 May 2025 00:28:59 +0000 (0:00:12.788) 0:00:20.647 ***********
2025-05-06 00:28:59.145073 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:28:59.149268 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:28:59.150460 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:28:59.150490 | orchestrator |
2025-05-06 00:28:59.150538 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-05-06 00:28:59.151097 | orchestrator | Tuesday 06 May 2025 00:28:59 +0000 (0:00:00.103) 0:00:20.751 ***********
2025-05-06 00:29:06.679610 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:29:06.680884 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:29:06.682137 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:29:06.683034 | orchestrator |
2025-05-06 00:29:06.684010 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-06 00:29:06.684924 | orchestrator | Tuesday 06 May 2025 00:29:06 +0000 (0:00:07.528) 0:00:28.279 ***********
2025-05-06 00:29:07.113461 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:07.114677 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:07.115134 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:07.116444 | orchestrator |
2025-05-06 00:29:07.117626 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-06 00:29:07.118113 | orchestrator | Tuesday 06 May 2025 00:29:07 +0000 (0:00:00.438) 0:00:28.718 ***********
2025-05-06 00:29:10.600895 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-05-06 00:29:10.601137 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-05-06 00:29:10.602090 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-05-06 00:29:10.602388 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-05-06 00:29:10.603376 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-05-06 00:29:10.603601 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-05-06 00:29:10.604101 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-05-06 00:29:10.604749 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-05-06 00:29:10.605383 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-05-06 00:29:10.605851 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-05-06 00:29:10.606410 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-05-06 00:29:10.607015 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-05-06 00:29:10.607258 | orchestrator |
2025-05-06 00:29:10.607796 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-06 00:29:10.608232 | orchestrator | Tuesday 06 May 2025 00:29:10 +0000 (0:00:03.479) 0:00:32.197 ***********
2025-05-06 00:29:11.719440 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:11.720126 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:11.721210 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:11.721288 | orchestrator |
2025-05-06 00:29:11.722614 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-06 00:29:11.723263 | orchestrator |
2025-05-06 00:29:11.723761 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-06 00:29:11.724417 | orchestrator | Tuesday 06 May 2025 00:29:11 +0000 (0:00:01.125) 0:00:33.323 ***********
2025-05-06 00:29:13.436125 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:29:16.808824 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:29:16.809068 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:29:16.809130 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:16.809146 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:16.809160 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:16.809180 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:16.809233 | orchestrator |
2025-05-06 00:29:16.809255 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:29:16.810062 | orchestrator | 2025-05-06 00:29:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-06 00:29:16.810184 | orchestrator | 2025-05-06 00:29:16 | INFO  | Please wait and do not abort execution.
2025-05-06 00:29:16.810212 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:29:16.810716 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:29:16.811262 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:29:16.811488 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:29:16.812021 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-06 00:29:16.812116 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-06 00:29:16.813088 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-06 00:29:16.813248 | orchestrator |
2025-05-06 00:29:16.813743 | orchestrator | Tuesday 06 May 2025 00:29:16 +0000 (0:00:05.090) 0:00:38.414 ***********
2025-05-06 00:29:16.813921 | orchestrator | ===============================================================================
2025-05-06 00:29:16.814100 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.79s
2025-05-06 00:29:16.814296 | orchestrator | Install required packages (Debian) -------------------------------------- 7.53s
2025-05-06 00:29:16.814530 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.09s
2025-05-06 00:29:16.814644 | orchestrator | Copy fact files --------------------------------------------------------- 3.48s
2025-05-06 00:29:16.815019 | orchestrator | Create custom facts directory ------------------------------------------- 2.32s
2025-05-06 00:29:16.815364 | orchestrator | Copy fact file ---------------------------------------------------------- 1.99s
2025-05-06 00:29:16.815592 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.13s
2025-05-06 00:29:16.816135 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.02s
2025-05-06 00:29:16.816304 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.95s
2025-05-06 00:29:16.816652 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2025-05-06 00:29:16.816953 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2025-05-06 00:29:16.817356 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.42s
2025-05-06 00:29:16.817592 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-05-06 00:29:16.817813 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s
2025-05-06 00:29:16.818120 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.13s
2025-05-06 00:29:16.818252 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.12s
2025-05-06 00:29:16.818555 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2025-05-06 00:29:16.818721 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s
2025-05-06 00:29:17.200782 | orchestrator | + osism apply bootstrap
2025-05-06 00:29:18.651057 | orchestrator | 2025-05-06 00:29:18 | INFO  | Task cf30da8b-cd60-4459-8562-a4c5a65eb83d (bootstrap) was prepared for execution.
2025-05-06 00:29:21.742807 | orchestrator | 2025-05-06 00:29:18 | INFO  | It takes a moment until task cf30da8b-cd60-4459-8562-a4c5a65eb83d (bootstrap) has been started and output is visible here.
2025-05-06 00:29:21.743075 | orchestrator |
2025-05-06 00:29:21.743173 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-05-06 00:29:21.744344 | orchestrator |
2025-05-06 00:29:21.745908 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-05-06 00:29:21.817492 | orchestrator | Tuesday 06 May 2025 00:29:21 +0000 (0:00:00.104) 0:00:00.104 ***********
2025-05-06 00:29:21.817651 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:21.838443 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:21.868063 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:21.892357 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:21.977368 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:29:21.977580 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:29:21.978228 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:29:21.981599 | orchestrator |
2025-05-06 00:29:21.982322 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-06 00:29:21.982660 | orchestrator |
2025-05-06 00:29:21.983070 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-06 00:29:21.983578 | orchestrator | Tuesday 06 May 2025 00:29:21 +0000 (0:00:00.238) 0:00:00.342 ***********
2025-05-06 00:29:26.398485 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:29:26.398707 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:29:26.399056 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:29:26.400662 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:26.401478 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:26.402915 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:26.403518 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:26.404689 | orchestrator |
2025-05-06 00:29:26.405912 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-05-06 00:29:26.406693 | orchestrator |
2025-05-06 00:29:26.407629 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-06 00:29:26.408406 | orchestrator | Tuesday 06 May 2025 00:29:26 +0000 (0:00:04.418) 0:00:04.760 ***********
2025-05-06 00:29:26.484685 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-06 00:29:26.520965 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-06 00:29:26.521095 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-06 00:29:26.521171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-05-06 00:29:26.569634 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-05-06 00:29:26.569775 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-06 00:29:26.569849 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-06 00:29:26.572105 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:29:26.572158 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-06 00:29:26.572736 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-06 00:29:26.826389 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-05-06 00:29:26.827270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-06 00:29:26.827325 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-06 00:29:26.829393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-05-06 00:29:26.830315 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-06 00:29:26.830375 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-06 00:29:26.831845 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:29:26.832674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-06 00:29:26.833996 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-06 00:29:26.834665 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-06 00:29:26.835702 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:29:26.836484 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-06 00:29:26.837261 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:29:26.838113 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-05-06 00:29:26.838939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:29:26.839799 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:29:26.840326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-06 00:29:26.840903 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:29:26.841479 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-06 00:29:26.841713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:29:26.842383 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-06 00:29:26.842889 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-05-06 00:29:26.843352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:29:26.844023 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-06 00:29:26.844390 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:29:26.844891 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:29:26.845417 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-06 00:29:26.845877 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-06 00:29:26.846542 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-06 00:29:26.846717 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:29:26.847467 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:29:26.847741 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-06 00:29:26.848214 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-06 00:29:26.852026 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:29:26.852632 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:29:26.852982 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:29:26.853467 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-06 00:29:26.853687 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-06 00:29:26.854090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:29:26.854316 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:29:26.854655 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-06 00:29:26.855024 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-06 00:29:26.856074 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-06 00:29:26.856298 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:29:26.856334 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-06 00:29:26.856364 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:29:26.856645 | orchestrator |
2025-05-06 00:29:26.856678 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-05-06 00:29:26.859066 | orchestrator |
2025-05-06 00:29:26.894310 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-05-06 00:29:26.894393 | orchestrator | Tuesday 06 May 2025 00:29:26 +0000 (0:00:00.428) 0:00:05.189 ***********
2025-05-06 00:29:26.894419 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:26.918392 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:26.937638 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:26.958242 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:27.013910 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:29:27.014683 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:29:27.015738 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:29:27.016706 | orchestrator |
2025-05-06 00:29:27.017537 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-05-06 00:29:27.018339 | orchestrator | Tuesday 06 May 2025 00:29:27 +0000 (0:00:00.189) 0:00:05.378 ***********
2025-05-06 00:29:28.181350 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:28.181622 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:29:28.182154 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:29:28.182187 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:28.182567 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:28.183264 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:28.183654 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:29:28.184063 | orchestrator |
2025-05-06 00:29:28.184718 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-05-06 00:29:28.185425 | orchestrator | Tuesday 06 May 2025 00:29:28 +0000 (0:00:01.166) 0:00:06.545 ***********
2025-05-06 00:29:29.395786 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:29.399281 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:29:29.400649 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:29.400672 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:29:29.400688 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:29.401101 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:29:29.401484 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:29.402899 | orchestrator |
2025-05-06 00:29:29.403196 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-05-06 00:29:29.403954 | orchestrator | Tuesday 06 May 2025 00:29:29 +0000 (0:00:01.213) 0:00:07.758 ***********
2025-05-06 00:29:29.656680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:29:29.658140 | orchestrator |
2025-05-06 00:29:29.658209 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-05-06 00:29:31.630862 | orchestrator | Tuesday 06 May 2025 00:29:29 +0000 (0:00:00.260) 0:00:08.019 ***********
2025-05-06 00:29:31.631040 | orchestrator | changed: [testbed-manager]
2025-05-06 00:29:31.632072 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:29:31.632607 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:29:31.633468 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:29:31.637295 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:29:31.720995 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:29:31.721099 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:29:31.721113 | orchestrator |
2025-05-06 00:29:31.721127 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-05-06 00:29:31.721139 | orchestrator | Tuesday 06 May 2025 00:29:31 +0000 (0:00:01.974) 0:00:09.993 ***********
2025-05-06 00:29:31.721166 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:29:31.915076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:29:31.915996 | orchestrator |
2025-05-06 00:29:31.916060 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-05-06 00:29:31.916143 | orchestrator | Tuesday 06 May 2025 00:29:31 +0000 (0:00:00.283) 0:00:10.277 ***********
2025-05-06 00:29:32.855732 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:29:32.856409 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:29:32.857573 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:29:32.858707 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:29:32.859257 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:29:32.859902 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:29:32.860476 | orchestrator |
2025-05-06 00:29:32.860989 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-05-06 00:29:32.861414 | orchestrator | Tuesday 06 May 2025 00:29:32 +0000 (0:00:00.938) 0:00:11.215 ***********
2025-05-06 00:29:32.917805 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:29:33.438680 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:29:33.439394 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:29:33.440523 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:29:33.441906 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:29:33.443036 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:29:33.443800 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:29:33.444400 | orchestrator |
2025-05-06 00:29:33.445412 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-05-06 00:29:33.446008 | orchestrator | Tuesday 06 May 2025 00:29:33 +0000 (0:00:00.586) 0:00:11.801 ***********
2025-05-06 00:29:33.539136 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:29:33.564738 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:29:33.594336 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:29:33.845467 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:29:33.845666 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:29:33.847323 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:29:33.847852 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:33.847881 | orchestrator |
2025-05-06 00:29:33.847903 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-06 00:29:33.848612 | orchestrator | Tuesday 06 May 2025 00:29:33 +0000 (0:00:00.406) 0:00:12.208 ***********
2025-05-06 00:29:33.926810 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:29:33.966387 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:29:34.011754 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:29:34.054756 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:29:34.113277 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:29:34.113437 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:29:34.114204 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:29:34.114565 | orchestrator |
2025-05-06 00:29:34.114953 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-06 00:29:34.115533 | orchestrator | Tuesday 06 May 2025 00:29:34 +0000 (0:00:00.269) 0:00:12.478 ***********
2025-05-06 00:29:34.426993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:29:34.427247 | orchestrator |
2025-05-06 00:29:34.427956 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-06 00:29:34.428673 | orchestrator | Tuesday 06 May 2025 00:29:34 +0000 (0:00:00.313) 0:00:12.791 ***********
2025-05-06 00:29:34.717463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:29:34.718282 | orchestrator |
2025-05-06 00:29:34.718811 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-06 00:29:34.719725 | orchestrator | Tuesday 06 May 2025 00:29:34 +0000 (0:00:00.288) 0:00:13.080 ***********
2025-05-06 00:29:35.993066 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:35.994495 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:29:35.994771 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:29:35.996740 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:35.998108 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:35.999350 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:29:36.000216 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:36.001996 | orchestrator |
2025-05-06 00:29:36.002232 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-06 00:29:36.002269 | orchestrator | Tuesday 06 May 2025 00:29:35 +0000 (0:00:01.275) 0:00:14.356 ***********
2025-05-06 00:29:36.063444 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:29:36.091711 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:29:36.112384 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:29:36.140141 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:29:36.199347 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:29:36.199918 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:29:36.201746 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:29:36.202544 | orchestrator |
2025-05-06 00:29:36.203495 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-06 00:29:36.204478 | orchestrator | Tuesday 06 May 2025 00:29:36 +0000 (0:00:00.207) 0:00:14.563 ***********
2025-05-06 00:29:36.708486 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:36.709164 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:36.709211 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:36.710314 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:36.711472 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:29:36.711577 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:29:36.712997 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:29:36.713970 | orchestrator |
2025-05-06 00:29:36.714314 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-06 00:29:36.714858 | orchestrator | Tuesday 06 May 2025 00:29:36 +0000 (0:00:00.508) 0:00:15.071 ***********
2025-05-06 00:29:36.783463 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:29:36.807290 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:29:36.841142 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:29:36.862995 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:29:36.946245 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:29:36.947287 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:29:36.947647 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:29:36.948708 | orchestrator |
2025-05-06 00:29:36.949710 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-06 00:29:36.950541 | orchestrator | Tuesday 06 May 2025 00:29:36 +0000 (0:00:00.238) 0:00:15.310 ***********
2025-05-06 00:29:37.477273 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:37.477820 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:29:37.478672 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:29:37.480214 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:29:37.480894 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:29:37.482102 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:29:37.482743 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:29:37.483345 | orchestrator |
2025-05-06 00:29:37.483965 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-06 00:29:37.484494 | orchestrator | Tuesday 06 May 2025 00:29:37 +0000 (0:00:00.530) 0:00:15.840 ***********
2025-05-06 00:29:38.594751 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:38.595469 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:29:38.596342 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:29:38.596974 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:29:38.597887 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:29:38.598188 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:29:38.598717 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:29:38.599283 | orchestrator |
2025-05-06 00:29:38.599862 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-06 00:29:38.600628 | orchestrator | Tuesday 06 May 2025 00:29:38 +0000 (0:00:01.115) 0:00:16.956 ***********
2025-05-06 00:29:39.754301 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:39.754924 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:29:39.755039 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:39.755723 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:29:39.756986 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:39.757460 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:29:39.758237 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:39.759049 | orchestrator |
2025-05-06 00:29:39.759728 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-06 00:29:39.760587 | orchestrator | Tuesday 06 May 2025 00:29:39 +0000 (0:00:01.158) 0:00:18.114 ***********
2025-05-06 00:29:40.063567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:29:40.063820 | orchestrator |
2025-05-06 00:29:40.064118 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-06 00:29:40.064909 | orchestrator | Tuesday 06 May 2025 00:29:40 +0000 (0:00:00.310) 0:00:18.425 ***********
2025-05-06 00:29:40.135379 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:29:41.486820 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:29:41.487807 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:29:41.488655 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:29:41.489539 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:29:41.489863 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:29:41.490688 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:29:41.491537 | orchestrator |
2025-05-06 00:29:41.491959 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-06 00:29:41.492315 | orchestrator | Tuesday 06 May 2025 00:29:41 +0000 (0:00:01.423) 0:00:19.849 ***********
2025-05-06 00:29:41.558881 | orchestrator | ok: [testbed-manager]
2025-05-06 00:29:41.578441 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:29:41.608988 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:29:41.643760 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:29:41.711159 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:29:41.711802 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:29:41.712640 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:29:41.714121 | orchestrator |
2025-05-06 00:29:41.715124 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-06 00:29:41.715916 | orchestrator | Tuesday 06 May 2025 00:29:41
+0000 (0:00:00.223) 0:00:20.072 *********** 2025-05-06 00:29:41.780402 | orchestrator | ok: [testbed-manager] 2025-05-06 00:29:41.827267 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:29:41.854308 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:29:41.910852 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:29:41.912829 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:29:41.913820 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:29:41.914572 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:29:41.915648 | orchestrator | 2025-05-06 00:29:41.916109 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-06 00:29:41.916782 | orchestrator | Tuesday 06 May 2025 00:29:41 +0000 (0:00:00.202) 0:00:20.275 *********** 2025-05-06 00:29:41.979358 | orchestrator | ok: [testbed-manager] 2025-05-06 00:29:42.004239 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:29:42.028886 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:29:42.052858 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:29:42.119872 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:29:42.120669 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:29:42.121561 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:29:42.122627 | orchestrator | 2025-05-06 00:29:42.124048 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-06 00:29:42.124751 | orchestrator | Tuesday 06 May 2025 00:29:42 +0000 (0:00:00.208) 0:00:20.483 *********** 2025-05-06 00:29:42.370063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:29:42.370797 | orchestrator | 2025-05-06 00:29:42.371918 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-06 00:29:42.373132 | 
orchestrator | Tuesday 06 May 2025 00:29:42 +0000 (0:00:00.249) 0:00:20.733 *********** 2025-05-06 00:29:42.944955 | orchestrator | ok: [testbed-manager] 2025-05-06 00:29:42.946272 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:29:42.947022 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:29:42.947725 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:29:42.948683 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:29:42.949914 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:29:42.950211 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:29:42.951483 | orchestrator | 2025-05-06 00:29:42.952294 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-06 00:29:42.952775 | orchestrator | Tuesday 06 May 2025 00:29:42 +0000 (0:00:00.574) 0:00:21.307 *********** 2025-05-06 00:29:43.040989 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:29:43.071802 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:29:43.092784 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:29:43.164755 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:29:43.168487 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:29:43.168650 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:29:43.168667 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:29:43.168680 | orchestrator | 2025-05-06 00:29:43.168948 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-06 00:29:43.169775 | orchestrator | Tuesday 06 May 2025 00:29:43 +0000 (0:00:00.220) 0:00:21.528 *********** 2025-05-06 00:29:44.212889 | orchestrator | changed: [testbed-manager] 2025-05-06 00:29:44.213289 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:29:44.214347 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:29:44.215130 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:29:44.216079 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:29:44.216931 | orchestrator | 
changed: [testbed-node-1] 2025-05-06 00:29:44.217669 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:29:44.218599 | orchestrator | 2025-05-06 00:29:44.219423 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-06 00:29:44.219892 | orchestrator | Tuesday 06 May 2025 00:29:44 +0000 (0:00:01.046) 0:00:22.575 *********** 2025-05-06 00:29:44.767645 | orchestrator | ok: [testbed-manager] 2025-05-06 00:29:44.767895 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:29:44.768233 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:29:44.769063 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:29:44.769883 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:29:44.770699 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:29:44.770995 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:29:44.771705 | orchestrator | 2025-05-06 00:29:44.772092 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-06 00:29:44.772588 | orchestrator | Tuesday 06 May 2025 00:29:44 +0000 (0:00:00.555) 0:00:23.131 *********** 2025-05-06 00:29:45.863295 | orchestrator | ok: [testbed-manager] 2025-05-06 00:29:45.864876 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:29:45.864998 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:29:45.865842 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:29:45.867474 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:29:45.867990 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:29:45.868620 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:29:45.869259 | orchestrator | 2025-05-06 00:29:45.869718 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-06 00:29:45.870361 | orchestrator | Tuesday 06 May 2025 00:29:45 +0000 (0:00:01.092) 0:00:24.223 *********** 2025-05-06 00:29:59.365066 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:29:59.366119 | orchestrator | ok: 
[testbed-node-5] 2025-05-06 00:29:59.366162 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:29:59.366178 | orchestrator | changed: [testbed-manager] 2025-05-06 00:29:59.366193 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:29:59.366207 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:29:59.366230 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:29:59.367044 | orchestrator | 2025-05-06 00:29:59.367294 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-06 00:29:59.367827 | orchestrator | Tuesday 06 May 2025 00:29:59 +0000 (0:00:13.498) 0:00:37.722 *********** 2025-05-06 00:29:59.444467 | orchestrator | ok: [testbed-manager] 2025-05-06 00:29:59.466601 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:29:59.490693 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:29:59.516310 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:29:59.593573 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:29:59.594427 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:29:59.595573 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:29:59.596004 | orchestrator | 2025-05-06 00:29:59.596642 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-06 00:29:59.597330 | orchestrator | Tuesday 06 May 2025 00:29:59 +0000 (0:00:00.235) 0:00:37.957 *********** 2025-05-06 00:29:59.667332 | orchestrator | ok: [testbed-manager] 2025-05-06 00:29:59.693153 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:29:59.721534 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:29:59.746428 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:29:59.812140 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:29:59.813245 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:29:59.814198 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:29:59.815180 | orchestrator | 2025-05-06 00:29:59.815627 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-05-06 00:29:59.815673 | orchestrator | Tuesday 06 May 2025 00:29:59 +0000 (0:00:00.218) 0:00:38.175 *********** 2025-05-06 00:29:59.890467 | orchestrator | ok: [testbed-manager] 2025-05-06 00:29:59.914372 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:29:59.936964 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:29:59.965771 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:30:00.027163 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:30:00.028248 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:30:00.029948 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:30:00.030695 | orchestrator | 2025-05-06 00:30:00.031330 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-06 00:30:00.032039 | orchestrator | Tuesday 06 May 2025 00:30:00 +0000 (0:00:00.214) 0:00:38.390 *********** 2025-05-06 00:30:00.309267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:30:00.310223 | orchestrator | 2025-05-06 00:30:00.310888 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-06 00:30:00.311406 | orchestrator | Tuesday 06 May 2025 00:30:00 +0000 (0:00:00.282) 0:00:38.673 *********** 2025-05-06 00:30:01.932845 | orchestrator | ok: [testbed-manager] 2025-05-06 00:30:01.933336 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:30:01.934457 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:30:01.936139 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:30:01.936173 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:30:01.937321 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:30:01.938455 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:30:01.939305 | orchestrator | 2025-05-06 00:30:01.939882 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-06 00:30:01.940641 | orchestrator | Tuesday 06 May 2025 00:30:01 +0000 (0:00:01.621) 0:00:40.294 *********** 2025-05-06 00:30:03.031407 | orchestrator | changed: [testbed-manager] 2025-05-06 00:30:03.031826 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:30:03.034743 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:30:03.035336 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:30:03.035387 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:30:03.035415 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:30:03.035636 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:30:03.035737 | orchestrator | 2025-05-06 00:30:03.036353 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-06 00:30:03.036878 | orchestrator | Tuesday 06 May 2025 00:30:03 +0000 (0:00:01.098) 0:00:41.393 *********** 2025-05-06 00:30:03.840461 | orchestrator | ok: [testbed-manager] 2025-05-06 00:30:03.840647 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:30:03.841255 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:30:03.841950 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:30:03.842661 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:30:03.843017 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:30:03.843658 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:30:03.844181 | orchestrator | 2025-05-06 00:30:03.844578 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-06 00:30:03.844950 | orchestrator | Tuesday 06 May 2025 00:30:03 +0000 (0:00:00.810) 0:00:42.203 *********** 2025-05-06 00:30:04.171810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 
00:30:04.172680 | orchestrator | 2025-05-06 00:30:04.172740 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-06 00:30:04.172772 | orchestrator | Tuesday 06 May 2025 00:30:04 +0000 (0:00:00.329) 0:00:42.533 *********** 2025-05-06 00:30:05.305421 | orchestrator | changed: [testbed-manager] 2025-05-06 00:30:05.306901 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:30:05.308219 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:30:05.309333 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:30:05.310594 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:30:05.312132 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:30:05.313794 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:30:05.314322 | orchestrator | 2025-05-06 00:30:05.315468 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-06 00:30:05.316185 | orchestrator | Tuesday 06 May 2025 00:30:05 +0000 (0:00:01.133) 0:00:43.666 *********** 2025-05-06 00:30:05.418182 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:30:05.440891 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:30:05.464059 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:30:05.613565 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:30:05.617266 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:30:05.617927 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:30:05.618010 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:30:05.618084 | orchestrator | 2025-05-06 00:30:05.618109 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-06 00:30:05.618943 | orchestrator | Tuesday 06 May 2025 00:30:05 +0000 (0:00:00.310) 0:00:43.976 *********** 2025-05-06 00:30:16.609854 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:30:16.610145 | orchestrator | changed: [testbed-node-0] 2025-05-06 
00:30:16.610179 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:30:16.610194 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:30:16.610208 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:30:16.610230 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:30:16.610968 | orchestrator | changed: [testbed-manager] 2025-05-06 00:30:16.611359 | orchestrator | 2025-05-06 00:30:16.612149 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-06 00:30:16.612864 | orchestrator | Tuesday 06 May 2025 00:30:16 +0000 (0:00:10.988) 0:00:54.965 *********** 2025-05-06 00:30:17.655264 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:30:17.655974 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:30:17.656021 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:30:17.656673 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:30:17.657607 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:30:17.658823 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:30:17.659715 | orchestrator | ok: [testbed-manager] 2025-05-06 00:30:17.660640 | orchestrator | 2025-05-06 00:30:17.661729 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-06 00:30:17.662527 | orchestrator | Tuesday 06 May 2025 00:30:17 +0000 (0:00:01.051) 0:00:56.017 *********** 2025-05-06 00:30:18.535977 | orchestrator | ok: [testbed-manager] 2025-05-06 00:30:18.536476 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:30:18.539170 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:30:18.539286 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:30:18.539311 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:30:18.539330 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:30:18.540533 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:30:18.541034 | orchestrator | 2025-05-06 00:30:18.541830 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-05-06 00:30:18.542485 | orchestrator | Tuesday 06 May 2025 00:30:18 +0000 (0:00:00.881) 0:00:56.899 ***********
2025-05-06 00:30:18.607212 | orchestrator | ok: [testbed-manager]
2025-05-06 00:30:18.633259 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:30:18.657933 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:30:18.692185 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:30:18.761755 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:30:18.762347 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:30:18.762745 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:30:18.764018 | orchestrator |
2025-05-06 00:30:18.764263 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-05-06 00:30:18.764697 | orchestrator | Tuesday 06 May 2025 00:30:18 +0000 (0:00:00.227) 0:00:57.126 ***********
2025-05-06 00:30:18.832225 | orchestrator | ok: [testbed-manager]
2025-05-06 00:30:18.856763 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:30:18.886114 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:30:18.906476 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:30:18.980633 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:30:18.981113 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:30:18.981343 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:30:18.982077 | orchestrator |
2025-05-06 00:30:18.984454 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-05-06 00:30:19.279747 | orchestrator | Tuesday 06 May 2025 00:30:18 +0000 (0:00:00.218) 0:00:57.345 ***********
2025-05-06 00:30:19.279883 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:30:19.280948 | orchestrator |
2025-05-06 00:30:19.282107 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-05-06 00:30:19.282992 | orchestrator | Tuesday 06 May 2025 00:30:19 +0000 (0:00:00.298) 0:00:57.643 ***********
2025-05-06 00:30:20.839802 | orchestrator | ok: [testbed-manager]
2025-05-06 00:30:20.840211 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:30:20.840878 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:30:20.840923 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:30:20.842429 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:30:20.843056 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:30:20.843490 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:30:20.844134 | orchestrator |
2025-05-06 00:30:20.844520 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-05-06 00:30:20.845092 | orchestrator | Tuesday 06 May 2025 00:30:20 +0000 (0:00:01.559) 0:00:59.203 ***********
2025-05-06 00:30:21.462940 | orchestrator | changed: [testbed-manager]
2025-05-06 00:30:21.464582 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:30:21.465375 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:30:21.466759 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:30:21.467568 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:30:21.469007 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:30:21.469823 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:30:21.470483 | orchestrator |
2025-05-06 00:30:21.471511 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-05-06 00:30:21.546553 | orchestrator | Tuesday 06 May 2025 00:30:21 +0000 (0:00:00.621) 0:00:59.824 ***********
2025-05-06 00:30:21.546662 | orchestrator | ok: [testbed-manager]
2025-05-06 00:30:21.579566 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:30:21.601527 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:30:21.627834 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:30:21.699759 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:30:21.699903 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:30:21.701225 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:30:21.701464 | orchestrator |
2025-05-06 00:30:21.703321 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-05-06 00:30:21.704020 | orchestrator | Tuesday 06 May 2025 00:30:21 +0000 (0:00:00.238) 0:01:00.062 ***********
2025-05-06 00:30:22.789999 | orchestrator | ok: [testbed-manager]
2025-05-06 00:30:22.792093 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:30:22.794113 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:30:22.794720 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:30:22.795669 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:30:22.797311 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:30:22.797737 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:30:22.798276 | orchestrator |
2025-05-06 00:30:22.798740 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-05-06 00:30:22.799304 | orchestrator | Tuesday 06 May 2025 00:30:22 +0000 (0:00:01.084) 0:01:01.146 ***********
2025-05-06 00:30:24.384264 | orchestrator | changed: [testbed-manager]
2025-05-06 00:30:24.384699 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:30:24.386126 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:30:24.387454 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:30:24.389172 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:30:24.390001 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:30:24.391221 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:30:24.392029 | orchestrator |
2025-05-06 00:30:24.392970 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-05-06 00:30:24.393701 | orchestrator | Tuesday 06 May 2025 00:30:24 +0000 (0:00:01.598) 0:01:02.745 ***********
2025-05-06 00:30:26.642828 | orchestrator | ok: [testbed-manager]
2025-05-06 00:30:26.644074 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:30:26.644164 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:30:26.644794 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:30:26.645750 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:30:26.646790 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:30:26.647629 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:30:26.648306 | orchestrator |
2025-05-06 00:30:26.649052 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-05-06 00:30:26.649562 | orchestrator | Tuesday 06 May 2025 00:30:26 +0000 (0:00:02.256) 0:01:05.002 ***********
2025-05-06 00:31:03.280868 | orchestrator | ok: [testbed-manager]
2025-05-06 00:31:03.281586 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:31:03.281625 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:31:03.281642 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:31:03.281666 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:31:03.282011 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:31:03.283271 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:31:03.283829 | orchestrator |
2025-05-06 00:31:03.284447 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-05-06 00:31:03.284937 | orchestrator | Tuesday 06 May 2025 00:31:03 +0000 (0:00:36.630) 0:01:41.633 ***********
2025-05-06 00:32:26.714192 | orchestrator | changed: [testbed-manager]
2025-05-06 00:32:26.715368 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:32:26.715404 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:32:26.715441 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:32:26.715455 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:32:26.715526 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:32:26.716050 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:32:26.717071 | orchestrator |
2025-05-06 00:32:26.717583 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-05-06 00:32:26.718173 | orchestrator | Tuesday 06 May 2025 00:32:26 +0000 (0:01:23.432) 0:03:05.066 ***********
2025-05-06 00:32:28.428510 | orchestrator | ok: [testbed-manager]
2025-05-06 00:32:28.428689 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:32:28.428714 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:32:28.428735 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:32:28.430144 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:32:28.430339 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:32:28.431096 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:32:28.431295 | orchestrator |
2025-05-06 00:32:28.431825 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-05-06 00:32:28.432352 | orchestrator | Tuesday 06 May 2025 00:32:28 +0000 (0:00:01.719) 0:03:06.786 ***********
2025-05-06 00:32:40.215297 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:32:40.215556 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:32:40.215587 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:32:40.215610 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:32:40.216310 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:32:40.217245 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:32:40.218069 | orchestrator | changed: [testbed-manager]
2025-05-06 00:32:40.218796 | orchestrator |
2025-05-06 00:32:40.219421 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-05-06 00:32:40.220130 | orchestrator | Tuesday 06 May 2025 00:32:40 +0000 (0:00:11.783) 0:03:18.569 ***********
2025-05-06 00:32:40.544969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-06 00:32:40.545515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-06 00:32:40.546708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-06 00:32:40.546969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-06 00:32:40.550504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-05-06 00:32:40.599626 | orchestrator |
2025-05-06 00:32:40.599693 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-05-06 00:32:40.599722 | orchestrator | Tuesday 06 May 2025 00:32:40 +0000 (0:00:00.339) 0:03:18.908 ***********
2025-05-06 00:32:40.599746 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-06 00:32:40.630550 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-06 00:32:40.630996 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:32:40.631792 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-06 00:32:40.662406 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:32:40.696866 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-06 00:32:40.698213 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:32:40.726096 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:32:41.217680 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-06 00:32:41.217976 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-06 00:32:41.218422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-06 00:32:41.219819 | orchestrator |
2025-05-06 00:32:41.220207 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-05-06 00:32:41.222268 | orchestrator | Tuesday 06 May 2025 00:32:41 +0000 (0:00:00.672) 0:03:19.581 ***********
2025-05-06 00:32:41.274926 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-06 00:32:41.278569 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-06 00:32:41.279400 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-06 00:32:41.328950 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-06 00:32:41.329443 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-06 00:32:41.330542 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-06 00:32:41.332176 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-06 00:32:41.332867 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-06 00:32:41.333839 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-06 00:32:41.336502 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-06 00:32:41.337518 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-06 00:32:41.337838 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-06 00:32:41.338361 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-06 00:32:41.339143 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-06 00:32:41.339692 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-06 00:32:41.340129 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-06 00:32:41.343341 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-06 00:32:41.344055 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-06 00:32:41.344673 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-06 00:32:41.344996 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-06 00:32:41.345897 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-06 00:32:41.346311 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-06 00:32:41.348244 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-06 00:32:41.348707 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-06 00:32:41.349138 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-06 00:32:41.357389 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:32:41.358715 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-06 00:32:41.359515 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-06 00:32:41.360157 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-06 00:32:41.399974 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:32:41.400633 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-06 00:32:41.401518 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-06 00:32:41.402153 | orchestrator | skipping: [testbed-node-4] => (item={'name':
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-06 00:32:41.402980 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-06 00:32:41.405525 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-06 00:32:41.406134 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-06 00:32:41.406456 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-06 00:32:41.406514 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-06 00:32:41.406822 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-06 00:32:41.407164 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-06 00:32:41.407610 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-06 00:32:41.407918 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-06 00:32:41.425842 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:32:44.974670 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:32:44.977647 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-06 00:32:44.978461 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-06 00:32:44.978562 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-06 00:32:44.980073 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-06 00:32:44.980928 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-06 00:32:44.982729 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-06 00:32:44.983414 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-06 00:32:44.984714 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-06 00:32:44.986379 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-06 00:32:44.987515 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-06 00:32:44.988651 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-06 00:32:44.991363 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-06 00:32:44.991504 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-06 00:32:44.991525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-06 00:32:44.991542 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-06 00:32:44.992564 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-06 00:32:44.993018 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-06 00:32:44.994203 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-06 00:32:44.994445 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-06 00:32:44.994923 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
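[Note: the per-group sysctl tuning logged in this play can be reproduced outside the testbed. The following is a minimal stand-alone sketch of an equivalent task, assuming the ansible.posix collection is installed; the actual task layout of osism.commons.sysctl may differ, and only two of the rabbitmq parameters from the log are shown.]

```yaml
# Hedged sketch, not the osism.commons.sysctl implementation:
# apply a list of kernel parameters on hosts in a given group.
- name: Set sysctl parameters on rabbitmq
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true      # also apply the value at runtime via sysctl -w
    state: present        # persist the setting in sysctl configuration
  loop:
    - { name: net.ipv4.tcp_keepalive_time, value: 6 }
    - { name: net.core.somaxconn, value: 4096 }
  when: "'rabbitmq' in group_names"
```

The skipping/changed pattern in the log above follows from such a condition: hosts outside the rabbitmq group (testbed-manager, testbed-node-3/4/5) skip each item, while the control-plane nodes apply it.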
2025-05-06 00:32:44.995514 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-06 00:32:44.996423 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-06 00:32:44.996556 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-06 00:32:44.997154 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-06 00:32:44.997619 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-06 00:32:45.000273 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-06 00:32:45.003942 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-06 00:32:45.003990 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-06 00:32:45.004004 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-06 00:32:45.004024 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-06 00:32:45.007096 | orchestrator |
2025-05-06 00:32:45.555828 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-05-06 00:32:45.555965 | orchestrator | Tuesday 06 May 2025 00:32:44 +0000 (0:00:03.755) 0:03:23.337 ***********
2025-05-06 00:32:45.556004 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-06 00:32:45.556566 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-06 00:32:45.558216 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-06 00:32:45.559203 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-06 00:32:45.560533 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-06 00:32:45.561200 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-06 00:32:45.562282 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-06 00:32:45.563057 | orchestrator |
2025-05-06 00:32:45.563735 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-05-06 00:32:45.564097 | orchestrator | Tuesday 06 May 2025 00:32:45 +0000 (0:00:00.582) 0:03:23.919 ***********
2025-05-06 00:32:45.607827 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-06 00:32:45.635810 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:32:45.712775 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-06 00:32:46.029001 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:32:46.029992 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-06 00:32:46.031284 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:32:46.032560 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-06 00:32:46.033998 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:32:46.035409 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-06 00:32:46.037088 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-06 00:32:46.037580 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-06 00:32:46.038772 | orchestrator |
2025-05-06 00:32:46.039101 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-05-06 00:32:46.039959 | orchestrator | Tuesday 06 May 2025 00:32:46 +0000 (0:00:00.472) 0:03:24.391 ***********
2025-05-06 00:32:46.078536 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-06 00:32:46.106285 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:32:46.178460 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-06 00:32:46.592115 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:32:46.593305 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-06 00:32:46.593469 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:32:46.594432 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-06 00:32:46.597148 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:32:46.597451 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-06 00:32:46.597497 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-06 00:32:46.597525 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-06 00:32:46.597557 | orchestrator |
2025-05-06 00:32:46.597918 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-05-06 00:32:46.598570 | orchestrator | Tuesday 06 May 2025 00:32:46 +0000 (0:00:00.288) 0:03:24.956 ***********
2025-05-06 00:32:46.671435 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:32:46.705870 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:32:46.733117 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:32:46.756235 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:32:46.881127 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:32:46.882410 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:32:46.884253 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:32:46.887823 | orchestrator |
2025-05-06 00:32:52.625039 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-05-06 00:32:52.625180 | orchestrator | Tuesday 06 May 2025 00:32:46 +0000 (0:00:00.288) 0:03:25.244 ***********
2025-05-06 00:32:52.625217 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:32:52.626671 | orchestrator | ok: [testbed-manager]
2025-05-06 00:32:52.626712 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:32:52.628289 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:32:52.628878 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:32:52.628905 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:32:52.629729 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:32:52.630189 | orchestrator |
2025-05-06 00:32:52.631067 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-05-06 00:32:52.631668 | orchestrator | Tuesday 06 May 2025 00:32:52 +0000 (0:00:05.741) 0:03:30.985 ***********
2025-05-06 00:32:52.722913 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-05-06 00:32:52.723579 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-05-06 00:32:52.758156 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:32:52.798384 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-05-06 00:32:52.798546 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:32:52.799186 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-05-06 00:32:52.830654 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:32:52.870904 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:32:52.871052 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-05-06 00:32:52.873968 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-05-06 00:32:52.934346 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:32:52.935105 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:32:52.935939 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-05-06 00:32:52.937046 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:32:52.937836 | orchestrator |
2025-05-06 00:32:52.938392 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-05-06 00:32:52.938685 | orchestrator | Tuesday 06 May 2025 00:32:52 +0000 (0:00:00.311) 0:03:31.297 ***********
2025-05-06 00:32:54.017776 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-05-06 00:32:54.018696 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-05-06 00:32:54.019751 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-05-06 00:32:54.023552 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-05-06 00:32:54.024568 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-05-06 00:32:54.024994 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-05-06 00:32:54.025467 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-05-06 00:32:54.026189 | orchestrator |
2025-05-06 00:32:54.026692 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-05-06 00:32:54.027343 | orchestrator | Tuesday 06 May 2025 00:32:54 +0000 (0:00:01.081) 0:03:32.378 ***********
2025-05-06 00:32:54.607072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:32:54.607256 | orchestrator |
2025-05-06 00:32:54.611709 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-05-06 00:32:54.611886 | orchestrator | Tuesday 06 May 2025 00:32:54 +0000 (0:00:00.589) 0:03:32.968 ***********
2025-05-06 00:32:55.786970 | orchestrator | ok: [testbed-manager]
2025-05-06 00:32:55.788573 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:32:55.788680 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:32:55.790118 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:32:55.790469 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:32:55.791435 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:32:55.791866 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:32:55.792979 | orchestrator |
2025-05-06 00:32:55.793729 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-05-06 00:32:55.795532 | orchestrator | Tuesday 06 May 2025 00:32:55 +0000 (0:00:01.180) 0:03:34.148 ***********
2025-05-06 00:32:56.419960 | orchestrator | ok: [testbed-manager]
2025-05-06 00:32:56.420792 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:32:56.422119 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:32:56.423235 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:32:56.424386 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:32:56.425165 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:32:56.426123 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:32:56.427182 | orchestrator |
2025-05-06 00:32:56.428049 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-05-06 00:32:56.429259 | orchestrator | Tuesday 06 May 2025 00:32:56 +0000 (0:00:00.630) 0:03:34.779 ***********
2025-05-06 00:32:57.079073 | orchestrator | changed: [testbed-manager]
2025-05-06 00:32:57.079256 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:32:57.079358 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:32:57.080230 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:32:57.080999 | orchestrator | changed: [testbed-node-1]
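[Note: on Debian/Ubuntu, disabling the dynamic motd-news banner, as the task above reports for every host, typically amounts to flipping one switch in /etc/default/motd-news. The following is a hedged sketch of an equivalent task, not necessarily the exact implementation of osism.commons.motd, which may additionally mask the motd-news systemd timer.]

```yaml
# Sketch: turn off the dynamic motd-news banner on Debian-family hosts.
- name: Disable the dynamic motd-news service
  ansible.builtin.lineinfile:
    path: /etc/default/motd-news
    regexp: '^ENABLED='
    line: 'ENABLED=0'
```

This is idempotent: the first run reports "changed" (as in the log), subsequent runs report "ok" once ENABLED=0 is already present.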
2025-05-06 00:32:57.081888 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:32:57.082333 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:32:57.085345 | orchestrator |
2025-05-06 00:32:57.086094 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-05-06 00:32:57.086135 | orchestrator | Tuesday 06 May 2025 00:32:57 +0000 (0:00:00.663) 0:03:35.442 ***********
2025-05-06 00:32:57.730892 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:32:57.731549 | orchestrator | ok: [testbed-manager]
2025-05-06 00:32:57.731596 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:32:57.732326 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:32:57.733252 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:32:57.734203 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:32:57.734930 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:32:57.734962 | orchestrator |
2025-05-06 00:32:57.735969 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-05-06 00:32:57.736659 | orchestrator | Tuesday 06 May 2025 00:32:57 +0000 (0:00:00.649) 0:03:36.092 ***********
2025-05-06 00:32:58.731261 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746489855.787716, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.731783 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746489930.3402169, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.733553 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746489912.8151293, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.736715 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746489882.4366527, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.737552 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746489881.0218954, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.738163 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746489920.971748, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.738822 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1746489882.2482216, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.739527 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746489931.4195414, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.740263 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746489806.0132015, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.741011 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746489838.41614, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.741302 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746489810.0956132, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.741848 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746489858.4887235, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.742421 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746489848.352699, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.742634 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1746489803.9539697, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 00:32:58.743217 | orchestrator |
2025-05-06 00:32:58.743621 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-05-06 00:32:58.743929 | orchestrator | Tuesday 06 May 2025 00:32:58 +0000 (0:00:00.999) 0:03:37.092 ***********
2025-05-06 00:32:59.848635 | orchestrator | changed: [testbed-manager]
2025-05-06 00:32:59.848861 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:32:59.851198 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:32:59.853303 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:32:59.853517 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:32:59.853548 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:32:59.854704 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:32:59.855146 | orchestrator |
2025-05-06 00:32:59.855624 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-05-06 00:32:59.856543 | orchestrator | Tuesday 06 May 2025 00:32:59 +0000 (0:00:01.119) 0:03:38.211 ***********
2025-05-06 00:33:00.986109 | orchestrator | changed: [testbed-manager]
2025-05-06 00:33:00.987425 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:33:00.988138 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:33:00.989122 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:33:00.989399 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:33:00.990981 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:33:00.991732 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:33:00.992657 | orchestrator |
2025-05-06 00:33:00.993983 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-05-06 00:33:00.995008 | orchestrator | Tuesday 06 May 2025 00:33:00 +0000 (0:00:01.136) 0:03:39.347 ***********
2025-05-06 00:33:01.081242 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:33:01.113748 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:33:01.185620 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:33:01.217868 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:33:01.272360 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:33:01.273844 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:33:01.274656 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:33:01.275303 | orchestrator |
2025-05-06 00:33:01.275973 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-05-06 00:33:01.276851 | orchestrator | Tuesday 06 May 2025 00:33:01 +0000 (0:00:00.289) 0:03:39.637 ***********
2025-05-06 00:33:02.028536 | orchestrator | ok: [testbed-manager]
2025-05-06 00:33:02.028910 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:33:02.037689 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:33:02.037865 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:33:02.037907 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:33:02.037933 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:33:02.037966 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:33:02.038222 | orchestrator |
2025-05-06 00:33:02.038950 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-05-06 00:33:02.039214 | orchestrator | Tuesday 06 May 2025 00:33:02 +0000 (0:00:00.753) 0:03:40.390 ***********
2025-05-06 00:33:02.460562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:33:02.461140 | orchestrator |
2025-05-06 00:33:02.461185 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-05-06 00:33:02.461920 | orchestrator | Tuesday 06 May 2025 00:33:02 +0000 (0:00:00.430) 0:03:40.821 ***********
2025-05-06 00:33:10.525776 | orchestrator | ok: [testbed-manager]
2025-05-06 00:33:10.528550 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:33:10.529579 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:33:10.529627 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:33:10.530779 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:33:10.531644 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:33:10.532637 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:33:10.533599 | orchestrator |
2025-05-06 00:33:10.534308 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-05-06 00:33:10.534977 | orchestrator | Tuesday 06 May 2025 00:33:10 +0000 (0:00:08.066) 0:03:48.888 ***********
2025-05-06 00:33:11.877352 | orchestrator | ok: [testbed-manager]
2025-05-06 00:33:11.877757 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:33:11.878534 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:33:11.879587 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:33:11.880546 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:33:11.881586 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:33:11.882346 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:33:11.883140 | orchestrator |
2025-05-06 00:33:11.884254 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-05-06 00:33:11.884444 | orchestrator | Tuesday 06 May 2025 00:33:11 +0000 (0:00:01.349) 0:03:50.237 ***********
2025-05-06 00:33:12.868319 | orchestrator | ok: [testbed-manager]
2025-05-06 00:33:12.872225 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:33:12.872321 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:33:12.872342 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:33:12.873986 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:33:12.875116 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:33:12.876054 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:33:12.876992 | orchestrator |
2025-05-06 00:33:12.878189 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-05-06 00:33:12.878685 | orchestrator | Tuesday 06 May 2025 00:33:12 +0000 (0:00:00.991) 0:03:51.229 ***********
2025-05-06 00:33:13.253008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:33:13.253717 | orchestrator |
2025-05-06 00:33:13.255157 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-05-06 00:33:13.257122 | orchestrator | Tuesday 06 May 2025 00:33:13 +0000 (0:00:00.385) 0:03:51.615 ***********
2025-05-06 00:33:21.752755 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:33:21.753063 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:33:21.753790 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:33:21.754568 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:33:21.754977 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:33:21.755441 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:33:21.756191 | orchestrator | changed: [testbed-manager]
2025-05-06 00:33:21.756933 | orchestrator |
2025-05-06 00:33:21.758588 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-05-06 00:33:22.413973 | orchestrator | Tuesday 06 May 2025 00:33:21 +0000 (0:00:08.498) 0:04:00.114 ***********
2025-05-06 00:33:22.414228 | orchestrator | changed: [testbed-manager]
2025-05-06 00:33:22.414357 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:33:22.414404 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:33:22.415280 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:33:22.415622 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:33:22.416673 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:33:22.418430 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:33:22.418789 | orchestrator |
2025-05-06 00:33:22.418820 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-05-06 00:33:22.418844 | orchestrator | Tuesday 06 May 2025 00:33:22 +0000 (0:00:00.662) 0:04:00.776 ***********
2025-05-06 00:33:23.548581 | orchestrator | changed: [testbed-manager]
2025-05-06 00:33:23.549137 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:33:23.552587 | orchestrator |
changed: [testbed-node-5] 2025-05-06 00:33:23.552958 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:33:23.553206 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:33:23.553701 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:33:23.554337 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:33:23.554823 | orchestrator | 2025-05-06 00:33:23.555721 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-06 00:33:23.556884 | orchestrator | Tuesday 06 May 2025 00:33:23 +0000 (0:00:01.134) 0:04:01.911 *********** 2025-05-06 00:33:24.533211 | orchestrator | changed: [testbed-manager] 2025-05-06 00:33:24.533622 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:33:24.536151 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:33:24.536650 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:33:24.536686 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:33:24.537270 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:33:24.537671 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:33:24.538502 | orchestrator | 2025-05-06 00:33:24.538999 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-06 00:33:24.539548 | orchestrator | Tuesday 06 May 2025 00:33:24 +0000 (0:00:00.984) 0:04:02.895 *********** 2025-05-06 00:33:24.625985 | orchestrator | ok: [testbed-manager] 2025-05-06 00:33:24.661629 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:33:24.702710 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:33:24.784375 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:33:24.866642 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:33:24.868495 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:33:24.869378 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:33:24.871354 | orchestrator | 2025-05-06 00:33:24.872255 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default 
value] *** 2025-05-06 00:33:24.872963 | orchestrator | Tuesday 06 May 2025 00:33:24 +0000 (0:00:00.334) 0:04:03.230 *********** 2025-05-06 00:33:24.970593 | orchestrator | ok: [testbed-manager] 2025-05-06 00:33:25.009595 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:33:25.042136 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:33:25.101002 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:33:25.182639 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:33:25.182862 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:33:25.183392 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:33:25.183870 | orchestrator | 2025-05-06 00:33:25.188063 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-06 00:33:25.188402 | orchestrator | Tuesday 06 May 2025 00:33:25 +0000 (0:00:00.316) 0:04:03.546 *********** 2025-05-06 00:33:25.288633 | orchestrator | ok: [testbed-manager] 2025-05-06 00:33:25.329332 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:33:25.369054 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:33:25.410862 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:33:25.468659 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:33:25.469250 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:33:25.469569 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:33:25.470352 | orchestrator | 2025-05-06 00:33:25.471751 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-06 00:33:25.471938 | orchestrator | Tuesday 06 May 2025 00:33:25 +0000 (0:00:00.287) 0:04:03.833 *********** 2025-05-06 00:33:31.219761 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:33:31.220018 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:33:31.220399 | orchestrator | ok: [testbed-manager] 2025-05-06 00:33:31.221550 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:33:31.222356 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:33:31.223074 | orchestrator | ok: 
[testbed-node-4] 2025-05-06 00:33:31.223673 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:33:31.224421 | orchestrator | 2025-05-06 00:33:31.225197 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-06 00:33:31.225976 | orchestrator | Tuesday 06 May 2025 00:33:31 +0000 (0:00:05.747) 0:04:09.581 *********** 2025-05-06 00:33:31.651710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:33:31.651896 | orchestrator | 2025-05-06 00:33:31.652702 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-06 00:33:31.653818 | orchestrator | Tuesday 06 May 2025 00:33:31 +0000 (0:00:00.432) 0:04:10.013 *********** 2025-05-06 00:33:31.728348 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-06 00:33:31.785412 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-06 00:33:31.785609 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-06 00:33:31.786356 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:33:31.786598 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-06 00:33:31.787947 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-06 00:33:31.831657 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-06 00:33:31.832265 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:33:31.832398 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-06 00:33:31.833404 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-06 00:33:31.868612 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:33:31.930968 | orchestrator | skipping: [testbed-node-5] 2025-05-06 
00:33:31.931135 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-06 00:33:31.932273 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-06 00:33:31.934225 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-06 00:33:31.934311 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-06 00:33:32.012741 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:33:32.013862 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:33:32.014920 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-06 00:33:32.015749 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-06 00:33:32.016607 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:33:32.017033 | orchestrator | 2025-05-06 00:33:32.017502 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-06 00:33:32.018070 | orchestrator | Tuesday 06 May 2025 00:33:32 +0000 (0:00:00.363) 0:04:10.377 *********** 2025-05-06 00:33:32.425618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:33:32.426104 | orchestrator | 2025-05-06 00:33:32.426627 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-06 00:33:32.428028 | orchestrator | Tuesday 06 May 2025 00:33:32 +0000 (0:00:00.412) 0:04:10.790 *********** 2025-05-06 00:33:32.496743 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-06 00:33:32.537980 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:33:32.538257 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-06 00:33:32.576653 | orchestrator | skipping: [testbed-node-4] => 
(item=ModemManager.service)  2025-05-06 00:33:32.576889 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:33:32.628126 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:33:32.628899 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-06 00:33:32.666678 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-06 00:33:32.667098 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:33:32.743970 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:33:32.744190 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-06 00:33:32.745314 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:33:32.746111 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-06 00:33:32.746585 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:33:32.747105 | orchestrator | 2025-05-06 00:33:32.747828 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-06 00:33:32.748496 | orchestrator | Tuesday 06 May 2025 00:33:32 +0000 (0:00:00.318) 0:04:11.108 *********** 2025-05-06 00:33:33.179331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:33:33.179805 | orchestrator | 2025-05-06 00:33:33.180500 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-06 00:33:33.181364 | orchestrator | Tuesday 06 May 2025 00:33:33 +0000 (0:00:00.435) 0:04:11.543 *********** 2025-05-06 00:34:07.558708 | orchestrator | changed: [testbed-manager] 2025-05-06 00:34:07.560340 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:34:07.560389 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:34:07.560404 | orchestrator | changed: 
[testbed-node-5] 2025-05-06 00:34:07.560428 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:34:07.562245 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:34:07.562275 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:34:07.562297 | orchestrator | 2025-05-06 00:34:07.563518 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-06 00:34:07.564194 | orchestrator | Tuesday 06 May 2025 00:34:07 +0000 (0:00:34.372) 0:04:45.915 *********** 2025-05-06 00:34:15.554523 | orchestrator | changed: [testbed-manager] 2025-05-06 00:34:15.556003 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:34:15.556932 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:34:15.556964 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:34:15.556986 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:34:15.557542 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:34:15.559297 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:34:15.561156 | orchestrator | 2025-05-06 00:34:15.562062 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-06 00:34:15.562796 | orchestrator | Tuesday 06 May 2025 00:34:15 +0000 (0:00:08.000) 0:04:53.916 *********** 2025-05-06 00:34:23.237176 | orchestrator | changed: [testbed-manager] 2025-05-06 00:34:23.237407 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:34:23.237433 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:34:23.237455 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:34:23.239082 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:34:23.239314 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:34:23.239340 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:34:23.239359 | orchestrator | 2025-05-06 00:34:23.240279 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-06 00:34:23.240720 | orchestrator | 
Tuesday 06 May 2025 00:34:23 +0000 (0:00:07.681) 0:05:01.597 *********** 2025-05-06 00:34:24.827398 | orchestrator | ok: [testbed-manager] 2025-05-06 00:34:24.827666 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:34:24.827700 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:34:24.829150 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:34:24.830234 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:34:24.830962 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:34:24.831825 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:34:24.832711 | orchestrator | 2025-05-06 00:34:24.833361 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-06 00:34:24.834171 | orchestrator | Tuesday 06 May 2025 00:34:24 +0000 (0:00:01.590) 0:05:03.188 *********** 2025-05-06 00:34:30.499195 | orchestrator | changed: [testbed-manager] 2025-05-06 00:34:30.500092 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:34:30.500208 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:34:30.501636 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:34:30.502604 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:34:30.503719 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:34:30.504101 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:34:30.504737 | orchestrator | 2025-05-06 00:34:30.505452 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-06 00:34:30.505772 | orchestrator | Tuesday 06 May 2025 00:34:30 +0000 (0:00:05.673) 0:05:08.861 *********** 2025-05-06 00:34:30.913433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:34:30.913937 | orchestrator | 2025-05-06 00:34:30.914642 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init 
configuration directory] ******* 2025-05-06 00:34:30.915545 | orchestrator | Tuesday 06 May 2025 00:34:30 +0000 (0:00:00.415) 0:05:09.277 *********** 2025-05-06 00:34:31.621967 | orchestrator | changed: [testbed-manager] 2025-05-06 00:34:31.622587 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:34:31.624166 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:34:31.625081 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:34:31.625887 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:34:31.626738 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:34:31.627679 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:34:31.628439 | orchestrator | 2025-05-06 00:34:31.629547 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-06 00:34:31.632101 | orchestrator | Tuesday 06 May 2025 00:34:31 +0000 (0:00:00.706) 0:05:09.983 *********** 2025-05-06 00:34:33.161950 | orchestrator | ok: [testbed-manager] 2025-05-06 00:34:33.163017 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:34:33.163066 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:34:33.164167 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:34:33.164792 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:34:33.165602 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:34:33.166489 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:34:33.167157 | orchestrator | 2025-05-06 00:34:33.167687 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-06 00:34:33.168209 | orchestrator | Tuesday 06 May 2025 00:34:33 +0000 (0:00:01.539) 0:05:11.523 *********** 2025-05-06 00:34:33.920289 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:34:33.920546 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:34:33.920580 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:34:33.920867 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:34:33.921099 | orchestrator | 
changed: [testbed-node-4] 2025-05-06 00:34:33.921800 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:34:33.921997 | orchestrator | changed: [testbed-manager] 2025-05-06 00:34:33.922206 | orchestrator | 2025-05-06 00:34:33.922886 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-06 00:34:33.923155 | orchestrator | Tuesday 06 May 2025 00:34:33 +0000 (0:00:00.759) 0:05:12.282 *********** 2025-05-06 00:34:33.982748 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:34:34.043997 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:34:34.077297 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:34:34.122393 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:34:34.199751 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:34:34.199920 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:34:34.200534 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:34:34.201434 | orchestrator | 2025-05-06 00:34:34.201778 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-06 00:34:34.202411 | orchestrator | Tuesday 06 May 2025 00:34:34 +0000 (0:00:00.280) 0:05:12.563 *********** 2025-05-06 00:34:34.306354 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:34:34.338702 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:34:34.371745 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:34:34.402984 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:34:34.592019 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:34:34.592281 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:34:34.592644 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:34:34.594366 | orchestrator | 2025-05-06 00:34:34.594925 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-06 00:34:34.595867 | orchestrator | Tuesday 06 May 2025 00:34:34 +0000 (0:00:00.389) 
0:05:12.953 *********** 2025-05-06 00:34:34.680945 | orchestrator | ok: [testbed-manager] 2025-05-06 00:34:34.749935 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:34:34.783873 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:34:34.820298 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:34:34.898376 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:34:34.899123 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:34:34.900955 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:34:34.901687 | orchestrator | 2025-05-06 00:34:34.902787 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-06 00:34:34.903534 | orchestrator | Tuesday 06 May 2025 00:34:34 +0000 (0:00:00.309) 0:05:13.262 *********** 2025-05-06 00:34:35.008301 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:34:35.044822 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:34:35.093862 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:34:35.137541 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:34:35.213945 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:34:35.223110 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:34:35.225606 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:34:35.225828 | orchestrator | 2025-05-06 00:34:35.227257 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-06 00:34:35.228231 | orchestrator | Tuesday 06 May 2025 00:34:35 +0000 (0:00:00.311) 0:05:13.573 *********** 2025-05-06 00:34:35.335873 | orchestrator | ok: [testbed-manager] 2025-05-06 00:34:35.369610 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:34:35.417986 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:34:35.447944 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:34:35.514800 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:34:35.515803 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:34:35.516847 | orchestrator | ok: 
[testbed-node-2] 2025-05-06 00:34:35.517687 | orchestrator | 2025-05-06 00:34:35.519005 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-06 00:34:35.519421 | orchestrator | Tuesday 06 May 2025 00:34:35 +0000 (0:00:00.305) 0:05:13.879 *********** 2025-05-06 00:34:35.613394 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:34:35.643381 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:34:35.675251 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:34:35.712753 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:34:35.743813 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:34:35.807850 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:34:35.809384 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:34:35.810168 | orchestrator | 2025-05-06 00:34:35.811199 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-06 00:34:35.812129 | orchestrator | Tuesday 06 May 2025 00:34:35 +0000 (0:00:00.292) 0:05:14.172 *********** 2025-05-06 00:34:35.899723 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:34:35.930664 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:34:35.960301 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:34:36.012679 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:34:36.147949 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:34:36.148602 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:34:36.150301 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:34:36.150932 | orchestrator | 2025-05-06 00:34:36.151890 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-06 00:34:36.152351 | orchestrator | Tuesday 06 May 2025 00:34:36 +0000 (0:00:00.339) 0:05:14.511 *********** 2025-05-06 00:34:36.556983 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:34:36.558176 | orchestrator | 2025-05-06 00:34:36.558811 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-06 00:34:36.560159 | orchestrator | Tuesday 06 May 2025 00:34:36 +0000 (0:00:00.408) 0:05:14.919 *********** 2025-05-06 00:34:37.385359 | orchestrator | ok: [testbed-manager] 2025-05-06 00:34:37.385633 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:34:37.385672 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:34:37.386674 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:34:37.388054 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:34:37.389283 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:34:37.391344 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:34:37.391662 | orchestrator | 2025-05-06 00:34:37.391697 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-06 00:34:37.392088 | orchestrator | Tuesday 06 May 2025 00:34:37 +0000 (0:00:00.821) 0:05:15.740 *********** 2025-05-06 00:34:40.034913 | orchestrator | ok: [testbed-manager] 2025-05-06 00:34:40.035900 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:34:40.036248 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:34:40.036287 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:34:40.037496 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:34:40.041791 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:34:40.042824 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:34:40.044539 | orchestrator | 2025-05-06 00:34:40.044585 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-06 00:34:40.046694 | orchestrator | Tuesday 06 May 2025 00:34:40 +0000 (0:00:02.657) 
0:05:18.397 *********** 2025-05-06 00:34:40.116877 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-06 00:34:40.118096 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-06 00:34:40.119116 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-06 00:34:40.208154 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:34:40.208393 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-06 00:34:40.208736 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-06 00:34:40.208773 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-06 00:34:40.297494 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:34:40.298602 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-06 00:34:40.300339 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-06 00:34:40.301210 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-06 00:34:40.367439 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:34:40.368451 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-06 00:34:40.369796 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-06 00:34:40.370539 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-06 00:34:40.449456 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:34:40.450366 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-06 00:34:40.451305 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-06 00:34:40.451743 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-06 00:34:40.525696 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:34:40.526131 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-06 00:34:40.526170 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-06 00:34:40.526193 
| orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-05-06 00:34:40.669306 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:34:40.669708 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-05-06 00:34:40.670796 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-05-06 00:34:40.671587 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-05-06 00:34:40.672400 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:34:40.673204 | orchestrator |
2025-05-06 00:34:40.673418 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-05-06 00:34:40.673995 | orchestrator | Tuesday 06 May 2025 00:34:40 +0000 (0:00:00.633) 0:05:19.031 ***********
2025-05-06 00:34:47.270176 | orchestrator | ok: [testbed-manager]
2025-05-06 00:34:47.270316 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:34:47.270332 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:34:47.270341 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:34:47.270350 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:34:47.270363 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:34:47.270496 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:34:47.270947 | orchestrator |
2025-05-06 00:34:47.271276 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-05-06 00:34:47.271575 | orchestrator | Tuesday 06 May 2025 00:34:47 +0000 (0:00:06.595) 0:05:25.626 ***********
2025-05-06 00:34:48.341335 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:34:48.341569 | orchestrator | ok: [testbed-manager]
2025-05-06 00:34:48.342141 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:34:48.345091 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:34:48.345365 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:34:48.346544 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:34:48.346717 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:34:48.347833 | orchestrator |
2025-05-06 00:34:48.348124 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-05-06 00:34:48.348958 | orchestrator | Tuesday 06 May 2025 00:34:48 +0000 (0:00:01.077) 0:05:26.704 ***********
2025-05-06 00:34:55.665963 | orchestrator | ok: [testbed-manager]
2025-05-06 00:34:55.666255 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:34:55.666787 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:34:55.670238 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:34:55.670987 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:34:55.671028 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:34:55.671054 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:34:55.671695 | orchestrator |
2025-05-06 00:34:55.672212 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-05-06 00:34:55.672967 | orchestrator | Tuesday 06 May 2025 00:34:55 +0000 (0:00:07.322) 0:05:34.026 ***********
2025-05-06 00:34:58.776572 | orchestrator | changed: [testbed-manager]
2025-05-06 00:34:58.777271 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:34:58.778166 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:34:58.780068 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:34:58.780933 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:34:58.781892 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:34:58.782342 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:34:58.783181 | orchestrator |
2025-05-06 00:34:58.783649 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-05-06 00:34:58.784371 | orchestrator | Tuesday 06 May 2025 00:34:58 +0000 (0:00:03.111) 0:05:37.138 ***********
2025-05-06 00:35:00.175516 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:00.176785 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:00.178968 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:00.179250 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:00.179762 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:00.180805 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:00.181911 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:00.182304 | orchestrator |
2025-05-06 00:35:00.183043 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-05-06 00:35:00.184149 | orchestrator | Tuesday 06 May 2025 00:35:00 +0000 (0:00:01.399) 0:05:38.537 ***********
2025-05-06 00:35:01.512960 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:01.516775 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:01.517328 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:01.517371 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:01.518422 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:01.519534 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:01.520789 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:01.522744 | orchestrator |
2025-05-06 00:35:01.714008 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-05-06 00:35:01.714187 | orchestrator | Tuesday 06 May 2025 00:35:01 +0000 (0:00:00.644) 0:05:39.873 ***********
2025-05-06 00:35:01.714223 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:35:01.775406 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:35:01.848080 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:35:01.909123 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:35:02.154843 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:35:02.156184 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:35:02.156983 | orchestrator | changed: [testbed-manager]
2025-05-06 00:35:02.157722 | orchestrator |
2025-05-06 00:35:02.158560 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-05-06 00:35:02.158973 | orchestrator | Tuesday 06 May 2025 00:35:02 +0000 (0:00:00.644) 0:05:40.517 ***********
2025-05-06 00:35:11.472351 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:11.472624 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:11.472681 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:11.473185 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:11.476257 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:11.477848 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:11.477908 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:11.477933 | orchestrator |
2025-05-06 00:35:11.480502 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-05-06 00:35:11.481507 | orchestrator | Tuesday 06 May 2025 00:35:11 +0000 (0:00:09.317) 0:05:49.834 ***********
2025-05-06 00:35:12.362927 | orchestrator | changed: [testbed-manager]
2025-05-06 00:35:12.363756 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:12.364905 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:12.366140 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:12.367074 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:12.368036 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:12.369404 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:12.369661 | orchestrator |
2025-05-06 00:35:12.369709 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-05-06 00:35:12.370211 | orchestrator | Tuesday 06 May 2025 00:35:12 +0000 (0:00:00.890) 0:05:50.725 ***********
2025-05-06 00:35:24.340800 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:24.340977 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:24.341001 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:24.341017 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:24.341037 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:24.342121 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:24.342391 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:24.342422 | orchestrator |
2025-05-06 00:35:24.342815 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-05-06 00:35:24.343236 | orchestrator | Tuesday 06 May 2025 00:35:24 +0000 (0:00:11.973) 0:06:02.699 ***********
2025-05-06 00:35:36.427511 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:36.427748 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:36.427778 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:36.427799 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:36.428498 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:36.430307 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:36.430952 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:36.431849 | orchestrator |
2025-05-06 00:35:36.432774 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-05-06 00:35:36.433140 | orchestrator | Tuesday 06 May 2025 00:35:36 +0000 (0:00:12.090) 0:06:14.789 ***********
2025-05-06 00:35:36.818880 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-05-06 00:35:37.530349 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-05-06 00:35:37.531030 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-05-06 00:35:37.538078 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-05-06 00:35:37.538318 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-05-06 00:35:37.539136 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-05-06 00:35:37.540132 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-05-06 00:35:37.541156 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-05-06 00:35:37.543297 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-05-06 00:35:37.544704 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-05-06 00:35:37.547726 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-05-06 00:35:37.548245 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-05-06 00:35:37.549078 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-05-06 00:35:37.549416 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-05-06 00:35:37.550218 | orchestrator |
2025-05-06 00:35:37.550626 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-05-06 00:35:37.551101 | orchestrator | Tuesday 06 May 2025 00:35:37 +0000 (0:00:01.102) 0:06:15.891 ***********
2025-05-06 00:35:37.670219 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:35:37.731410 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:35:37.792662 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:35:37.857626 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:35:37.918815 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:35:38.033065 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:35:38.034251 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:35:38.035644 | orchestrator |
2025-05-06 00:35:38.036875 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-05-06 00:35:38.037651 | orchestrator | Tuesday 06 May 2025 00:35:38 +0000 (0:00:00.502) 0:06:16.394 ***********
2025-05-06 00:35:41.583116 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:41.584649 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:41.586825 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:41.589778 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:41.589937 | orchestrator | changed: [testbed-node-5]
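The "Pin docker package version", "Pin docker-cli package version", and "Lock/Unlock containerd package" tasks above correspond to standard apt version pinning and package holds. A minimal sketch of what such a role typically writes (the package name and version pattern here are illustrative assumptions, not values taken from this job):

```
# /etc/apt/preferences.d/docker -- pin docker-ce to a known-good version
# (version string is an example)
Package: docker-ce
Pin: version 5:26.*
Pin-Priority: 1001
```

Locking and unlocking a package against upgrades, as in the containerd tasks, is conventionally done with `apt-mark hold containerd.io` and `apt-mark unhold containerd.io`; the lock task reports `changed` on every host once the hold is set.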
2025-05-06 00:35:41.590492 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:41.591268 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:41.591887 | orchestrator |
2025-05-06 00:35:41.592722 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-05-06 00:35:41.593305 | orchestrator | Tuesday 06 May 2025 00:35:41 +0000 (0:00:03.551) 0:06:19.945 ***********
2025-05-06 00:35:41.701433 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:35:41.910764 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:35:41.974731 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:35:42.035055 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:35:42.102314 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:35:42.199396 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:35:42.199642 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:35:42.199679 | orchestrator |
2025-05-06 00:35:42.200069 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-05-06 00:35:42.200742 | orchestrator | Tuesday 06 May 2025 00:35:42 +0000 (0:00:00.616) 0:06:20.561 ***********
2025-05-06 00:35:42.276942 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-05-06 00:35:42.343564 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-05-06 00:35:42.343681 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:35:42.343781 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-05-06 00:35:42.344301 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-05-06 00:35:42.407230 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:35:42.408099 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-05-06 00:35:42.408429 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-05-06 00:35:42.473078 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:35:42.474210 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-05-06 00:35:42.474449 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-05-06 00:35:42.558002 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:35:42.558273 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-05-06 00:35:42.559349 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-05-06 00:35:42.625620 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:35:42.625844 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-05-06 00:35:42.626602 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-05-06 00:35:42.718992 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:35:42.720280 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-05-06 00:35:42.721544 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-05-06 00:35:42.721584 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:35:42.722576 | orchestrator |
2025-05-06 00:35:42.723826 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-05-06 00:35:42.724121 | orchestrator | Tuesday 06 May 2025 00:35:42 +0000 (0:00:00.519) 0:06:21.081 ***********
2025-05-06 00:35:42.848981 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:35:42.909838 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:35:42.990792 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:35:43.053015 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:35:43.117191 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:35:43.210200 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:35:43.210862 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:35:43.211726 | orchestrator |
2025-05-06 00:35:43.211767 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-05-06 00:35:43.212361 | orchestrator | Tuesday 06 May 2025 00:35:43 +0000 (0:00:00.491) 0:06:21.573 ***********
2025-05-06 00:35:43.334506 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:35:43.401762 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:35:43.465646 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:35:43.529795 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:35:43.602643 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:35:43.711833 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:35:43.712756 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:35:43.715925 | orchestrator |
2025-05-06 00:35:43.716503 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-05-06 00:35:43.718270 | orchestrator | Tuesday 06 May 2025 00:35:43 +0000 (0:00:00.498) 0:06:22.072 ***********
2025-05-06 00:35:43.852955 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:35:43.914069 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:35:43.977378 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:35:44.048337 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:35:44.121421 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:35:44.238008 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:35:44.238351 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:35:44.238952 | orchestrator |
2025-05-06 00:35:44.239737 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-05-06 00:35:44.240324 | orchestrator | Tuesday 06 May 2025 00:35:44 +0000 (0:00:00.527) 0:06:22.599 ***********
2025-05-06 00:35:50.073938 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:50.074407 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:50.075004 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:50.075434 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:50.076720 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:50.077133 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:50.077889 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:50.078412 | orchestrator |
2025-05-06 00:35:50.078795 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-05-06 00:35:50.079185 | orchestrator | Tuesday 06 May 2025 00:35:50 +0000 (0:00:05.836) 0:06:28.435 ***********
2025-05-06 00:35:50.921924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:35:50.922804 | orchestrator |
2025-05-06 00:35:50.924012 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-05-06 00:35:50.924720 | orchestrator | Tuesday 06 May 2025 00:35:50 +0000 (0:00:00.850) 0:06:29.286 ***********
2025-05-06 00:35:51.338308 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:51.746833 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:51.747051 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:51.748703 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:51.748926 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:51.749299 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:51.750123 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:51.750374 | orchestrator |
2025-05-06 00:35:51.751673 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-05-06 00:35:51.751829 | orchestrator | Tuesday 06 May 2025 00:35:51 +0000 (0:00:00.824) 0:06:30.110 ***********
2025-05-06 00:35:52.331768 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:52.730160 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:52.730393 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:52.731244 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:52.732231 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:52.732844 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:52.733870 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:52.734452 | orchestrator |
2025-05-06 00:35:52.735223 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-05-06 00:35:52.736057 | orchestrator | Tuesday 06 May 2025 00:35:52 +0000 (0:00:00.982) 0:06:31.092 ***********
2025-05-06 00:35:54.044625 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:54.045154 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:54.046504 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:54.046571 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:54.047175 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:54.048168 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:54.049033 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:54.049511 | orchestrator |
2025-05-06 00:35:54.050164 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-05-06 00:35:54.050647 | orchestrator | Tuesday 06 May 2025 00:35:54 +0000 (0:00:01.312) 0:06:32.405 ***********
2025-05-06 00:35:54.181522 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:35:55.392028 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:35:55.392210 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:35:55.392811 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:35:55.393516 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:35:55.395327 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:35:55.396514 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:35:55.396558 | orchestrator |
2025-05-06 00:35:55.397157 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-05-06 00:35:55.398119 | orchestrator | Tuesday 06 May 2025 00:35:55 +0000 (0:00:01.349) 0:06:33.755 ***********
2025-05-06 00:35:56.656853 | orchestrator | ok: [testbed-manager]
2025-05-06 00:35:56.657589 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:56.658113 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:56.659010 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:56.659838 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:56.660535 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:56.661294 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:56.661913 | orchestrator |
2025-05-06 00:35:56.662632 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-05-06 00:35:56.663672 | orchestrator | Tuesday 06 May 2025 00:35:56 +0000 (0:00:01.261) 0:06:35.016 ***********
2025-05-06 00:35:58.052295 | orchestrator | changed: [testbed-manager]
2025-05-06 00:35:58.053290 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:35:58.053889 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:35:58.057366 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:35:58.058321 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:35:58.058357 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:35:58.058374 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:35:58.058396 | orchestrator |
2025-05-06 00:35:58.058675 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-05-06 00:35:58.059269 | orchestrator | Tuesday 06 May 2025 00:35:58 +0000 (0:00:01.395) 0:06:36.412 ***********
2025-05-06 00:35:59.073957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:35:59.074790 | orchestrator |
2025-05-06 00:35:59.078189 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-05-06 00:36:00.384577 | orchestrator | Tuesday 06 May 2025 00:35:59 +0000 (0:00:01.021) 0:06:37.434 ***********
2025-05-06 00:36:00.384765 | orchestrator | ok: [testbed-manager]
2025-05-06 00:36:00.384878 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:36:00.387101 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:36:00.387680 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:36:00.388562 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:36:00.388919 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:36:00.389443 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:36:00.390121 | orchestrator |
2025-05-06 00:36:00.390855 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-05-06 00:36:00.391387 | orchestrator | Tuesday 06 May 2025 00:36:00 +0000 (0:00:01.310) 0:06:38.745 ***********
2025-05-06 00:36:01.475870 | orchestrator | ok: [testbed-manager]
2025-05-06 00:36:01.477272 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:36:01.478244 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:36:01.478556 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:36:01.479027 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:36:01.479505 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:36:01.480242 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:36:01.480633 | orchestrator |
2025-05-06 00:36:01.481109 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-05-06 00:36:01.481523 | orchestrator | Tuesday 06 May 2025 00:36:01 +0000 (0:00:01.093) 0:06:39.839 ***********
2025-05-06 00:36:02.563622 | orchestrator | ok: [testbed-manager]
2025-05-06 00:36:02.563864 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:36:02.563898 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:36:02.564507 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:36:02.566176 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:36:02.567549 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:36:02.568818 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:36:02.569384 | orchestrator |
2025-05-06 00:36:02.569962 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-06 00:36:02.572150 | orchestrator | Tuesday 06 May 2025 00:36:02 +0000 (0:00:01.085) 0:06:40.924 ***********
2025-05-06 00:36:03.838937 | orchestrator | ok: [testbed-manager]
2025-05-06 00:36:03.842609 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:36:03.844011 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:36:03.844044 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:36:03.844059 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:36:03.844075 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:36:03.844096 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:36:03.844387 | orchestrator |
2025-05-06 00:36:03.844940 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-05-06 00:36:03.845977 | orchestrator | Tuesday 06 May 2025 00:36:03 +0000 (0:00:01.276) 0:06:42.200 ***********
2025-05-06 00:36:04.987342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:36:04.987578 | orchestrator |
2025-05-06 00:36:04.990280 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-06 00:36:04.990638 | orchestrator | Tuesday 06 May 2025 00:36:04 +0000 (0:00:00.872) 0:06:43.073 ***********
2025-05-06 00:36:04.990669 | orchestrator |
2025-05-06 00:36:04.990685 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-06 00:36:04.990705 | orchestrator | Tuesday 06 May 2025 00:36:04 +0000 (0:00:00.037) 0:06:43.110 ***********
2025-05-06 00:36:04.991485 | orchestrator |
2025-05-06 00:36:04.991540 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-06 00:36:04.991692 | orchestrator | Tuesday 06 May 2025 00:36:04 +0000 (0:00:00.037) 0:06:43.147 ***********
2025-05-06 00:36:04.992501 | orchestrator |
2025-05-06 00:36:04.993084 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-06 00:36:04.993411 | orchestrator | Tuesday 06 May 2025 00:36:04 +0000 (0:00:00.043) 0:06:43.190 ***********
2025-05-06 00:36:04.993759 | orchestrator |
2025-05-06 00:36:04.994146 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-06 00:36:04.994587 | orchestrator | Tuesday 06 May 2025 00:36:04 +0000 (0:00:00.037) 0:06:43.228 ***********
2025-05-06 00:36:04.995159 | orchestrator |
2025-05-06 00:36:04.995760 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-06 00:36:04.996186 | orchestrator | Tuesday 06 May 2025 00:36:04 +0000 (0:00:00.037) 0:06:43.265 ***********
2025-05-06 00:36:04.996378 | orchestrator |
2025-05-06 00:36:04.996662 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-06 00:36:04.996969 | orchestrator | Tuesday 06 May 2025 00:36:04 +0000 (0:00:00.044) 0:06:43.310 ***********
2025-05-06 00:36:04.997311 | orchestrator |
2025-05-06 00:36:04.997660 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-06 00:36:04.999952 | orchestrator | Tuesday 06 May 2025 00:36:04 +0000 (0:00:00.037) 0:06:43.347 ***********
2025-05-06 00:36:06.093186 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:36:06.093372 | orchestrator | ok: [testbed-node-1]
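The config tasks earlier in this play ("Copy systemd overlay file", "Copy limits configuration file", "Copy daemon.json configuration file") drop standard Docker daemon configuration on each host before the service handlers run. A minimal daemon.json of the kind such roles deploy (contents assumed for illustration; the actual file is not shown in this log):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

dockerd reads this file at startup, which is why a change to it is followed by a docker service restart handler rather than taking effect immediately.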
2025-05-06 00:36:06.094177 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:36:06.094568 | orchestrator |
2025-05-06 00:36:06.094599 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-05-06 00:36:06.094620 | orchestrator | Tuesday 06 May 2025 00:36:06 +0000 (0:00:01.106) 0:06:44.453 ***********
2025-05-06 00:36:07.549296 | orchestrator | changed: [testbed-manager]
2025-05-06 00:36:07.551720 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:36:07.552882 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:36:07.554131 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:36:07.556520 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:36:07.560573 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:36:07.560618 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:36:07.562090 | orchestrator |
2025-05-06 00:36:07.562129 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-05-06 00:36:08.692750 | orchestrator | Tuesday 06 May 2025 00:36:07 +0000 (0:00:01.455) 0:06:45.909 ***********
2025-05-06 00:36:08.692907 | orchestrator | changed: [testbed-manager]
2025-05-06 00:36:08.692985 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:36:08.693182 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:36:08.693544 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:36:08.694367 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:36:08.694886 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:36:08.694972 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:36:08.696779 | orchestrator |
2025-05-06 00:36:08.696990 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-05-06 00:36:08.697543 | orchestrator | Tuesday 06 May 2025 00:36:08 +0000 (0:00:01.145) 0:06:47.055 ***********
2025-05-06 00:36:08.829748 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:36:10.723198 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:36:10.723396 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:36:10.724057 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:36:10.724980 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:36:10.726812 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:36:10.729092 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:36:10.729547 | orchestrator |
2025-05-06 00:36:10.730717 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-05-06 00:36:10.731773 | orchestrator | Tuesday 06 May 2025 00:36:10 +0000 (0:00:02.025) 0:06:49.081 ***********
2025-05-06 00:36:10.826611 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:36:10.827734 | orchestrator |
2025-05-06 00:36:10.828380 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-05-06 00:36:10.829749 | orchestrator | Tuesday 06 May 2025 00:36:10 +0000 (0:00:00.109) 0:06:49.190 ***********
2025-05-06 00:36:11.786252 | orchestrator | ok: [testbed-manager]
2025-05-06 00:36:11.787222 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:36:11.788433 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:36:11.789328 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:36:11.789887 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:36:11.790667 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:36:11.790918 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:36:11.791327 | orchestrator |
2025-05-06 00:36:11.791845 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-05-06 00:36:11.792579 | orchestrator | Tuesday 06 May 2025 00:36:11 +0000 (0:00:00.956) 0:06:50.146 ***********
2025-05-06 00:36:11.917424 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:36:11.979758 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:36:12.041835 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:36:12.256668 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:36:12.321832 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:36:12.431015 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:36:12.435684 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:36:12.436731 | orchestrator |
2025-05-06 00:36:12.437450 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-06 00:36:12.439254 | orchestrator | Tuesday 06 May 2025 00:36:12 +0000 (0:00:00.646) 0:06:50.793 ***********
2025-05-06 00:36:13.280857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:36:13.282314 | orchestrator |
2025-05-06 00:36:13.282987 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-06 00:36:13.283930 | orchestrator | Tuesday 06 May 2025 00:36:13 +0000 (0:00:00.850) 0:06:51.643 ***********
2025-05-06 00:36:13.681005 | orchestrator | ok: [testbed-manager]
2025-05-06 00:36:14.124633 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:36:14.124822 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:36:14.124854 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:36:14.125045 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:36:14.127193 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:36:14.127581 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:36:14.128001 | orchestrator |
2025-05-06 00:36:14.128287 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-06 00:36:14.128985 | orchestrator | Tuesday 06 May 2025 00:36:14 +0000 (0:00:00.843) 0:06:52.487 ***********
2025-05-06 00:36:16.742162 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-06 00:36:16.743125 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-06 00:36:16.746421 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-06 00:36:16.746626 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-06 00:36:16.747910 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-06 00:36:16.748779 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-06 00:36:16.749632 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-06 00:36:16.749924 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-06 00:36:16.750757 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-06 00:36:16.751668 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-06 00:36:16.752619 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-06 00:36:16.752952 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-06 00:36:16.753644 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-06 00:36:16.754626 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-06 00:36:16.756232 | orchestrator |
2025-05-06 00:36:16.756514 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-06 00:36:16.756545 | orchestrator | Tuesday 06 May 2025 00:36:16 +0000 (0:00:02.615) 0:06:55.103 ***********
2025-05-06 00:36:16.888222 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:36:16.950545 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:36:17.014416 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:36:17.085390 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:36:17.145415 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:36:17.241771 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:36:17.241946 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:36:17.242651 | orchestrator |
2025-05-06 00:36:17.243711 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-06 00:36:17.247542 | orchestrator | Tuesday 06 May 2025 00:36:17 +0000 (0:00:00.500) 0:06:55.604 ***********
2025-05-06 00:36:18.056514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:36:18.057273 | orchestrator |
2025-05-06 00:36:18.057318 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-06 00:36:18.058090 | orchestrator | Tuesday 06 May 2025 00:36:18 +0000 (0:00:00.814) 0:06:56.419 ***********
2025-05-06 00:36:18.456523 | orchestrator | ok: [testbed-manager]
2025-05-06 00:36:18.876851 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:36:18.877221 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:36:18.879091 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:36:18.879371 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:36:18.880762 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:36:18.881443 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:36:18.882307 | orchestrator |
2025-05-06 00:36:18.882611 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-06 00:36:18.883085 | orchestrator | Tuesday 06 May 2025 00:36:18 +0000 (0:00:00.820) 0:06:57.240 ***********
2025-05-06 00:36:19.287956 | orchestrator | ok: [testbed-manager]
2025-05-06 00:36:19.895300 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:36:19.895695 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:36:19.896971 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:36:19.900393 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:36:19.900923 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:19.900947 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:19.900962 | orchestrator | 2025-05-06 00:36:19.900987 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-06 00:36:19.901010 | orchestrator | Tuesday 06 May 2025 00:36:19 +0000 (0:00:01.018) 0:06:58.258 *********** 2025-05-06 00:36:20.017770 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:36:20.083717 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:36:20.147147 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:36:20.211137 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:36:20.279157 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:36:20.375961 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:36:20.376721 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:36:20.376921 | orchestrator | 2025-05-06 00:36:20.380337 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-06 00:36:21.760733 | orchestrator | Tuesday 06 May 2025 00:36:20 +0000 (0:00:00.479) 0:06:58.737 *********** 2025-05-06 00:36:21.761650 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:21.761723 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:21.761757 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:21.762575 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:21.764422 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:21.765177 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:21.766093 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:21.767131 | orchestrator | 2025-05-06 00:36:21.768343 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-06 00:36:21.769360 | orchestrator | Tuesday 06 May 2025 00:36:21 +0000 (0:00:01.383) 0:07:00.121 *********** 2025-05-06 00:36:21.887855 | orchestrator | skipping: 
[testbed-manager] 2025-05-06 00:36:21.946996 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:36:22.013032 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:36:22.076930 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:36:22.135817 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:36:22.225903 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:36:22.226226 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:36:22.226678 | orchestrator | 2025-05-06 00:36:22.228420 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-06 00:36:22.228837 | orchestrator | Tuesday 06 May 2025 00:36:22 +0000 (0:00:00.468) 0:07:00.589 *********** 2025-05-06 00:36:24.200058 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:24.200332 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:24.200373 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:24.201158 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:24.201336 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:24.204147 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:24.204600 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:24.205350 | orchestrator | 2025-05-06 00:36:24.205825 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-06 00:36:24.206638 | orchestrator | Tuesday 06 May 2025 00:36:24 +0000 (0:00:01.971) 0:07:02.560 *********** 2025-05-06 00:36:25.489867 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:25.491776 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:36:25.492972 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:36:25.494371 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:36:25.495204 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:36:25.495887 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:36:25.496518 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:36:25.497398 | orchestrator 
| 2025-05-06 00:36:25.497880 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-06 00:36:25.498902 | orchestrator | Tuesday 06 May 2025 00:36:25 +0000 (0:00:01.290) 0:07:03.851 *********** 2025-05-06 00:36:27.169964 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:27.170277 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:36:27.173490 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:36:27.174059 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:36:27.174138 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:36:27.174166 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:36:27.174200 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:36:27.174293 | orchestrator | 2025-05-06 00:36:27.174633 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-06 00:36:27.175302 | orchestrator | Tuesday 06 May 2025 00:36:27 +0000 (0:00:01.681) 0:07:05.532 *********** 2025-05-06 00:36:28.781086 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:28.781705 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:36:28.781754 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:36:28.782161 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:36:28.784227 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:36:28.785521 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:36:28.786176 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:36:28.786803 | orchestrator | 2025-05-06 00:36:28.787591 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-06 00:36:28.788281 | orchestrator | Tuesday 06 May 2025 00:36:28 +0000 (0:00:01.609) 0:07:07.142 *********** 2025-05-06 00:36:29.337696 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:29.407646 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:29.839433 | orchestrator | ok: [testbed-node-4] 2025-05-06 
00:36:29.840979 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:29.843736 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:29.843778 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:29.843802 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:29.846346 | orchestrator | 2025-05-06 00:36:29.961883 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-06 00:36:29.961998 | orchestrator | Tuesday 06 May 2025 00:36:29 +0000 (0:00:01.059) 0:07:08.201 *********** 2025-05-06 00:36:29.962100 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:36:30.027748 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:36:30.089828 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:36:30.161941 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:36:30.229767 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:36:30.605951 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:36:30.606510 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:36:30.606990 | orchestrator | 2025-05-06 00:36:30.607591 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-06 00:36:30.608585 | orchestrator | Tuesday 06 May 2025 00:36:30 +0000 (0:00:00.766) 0:07:08.968 *********** 2025-05-06 00:36:30.743401 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:36:30.804775 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:36:30.875260 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:36:30.939807 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:36:31.016308 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:36:31.112734 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:36:31.112930 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:36:31.113422 | orchestrator | 2025-05-06 00:36:31.113496 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-06 
00:36:31.113878 | orchestrator | Tuesday 06 May 2025 00:36:31 +0000 (0:00:00.508) 0:07:09.477 *********** 2025-05-06 00:36:31.235642 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:31.302480 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:31.363428 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:31.424935 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:31.492418 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:31.583720 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:31.583929 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:31.584276 | orchestrator | 2025-05-06 00:36:31.584617 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-06 00:36:31.586489 | orchestrator | Tuesday 06 May 2025 00:36:31 +0000 (0:00:00.468) 0:07:09.945 *********** 2025-05-06 00:36:31.860559 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:31.924890 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:31.987379 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:32.066443 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:32.132049 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:32.235831 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:32.236150 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:32.236735 | orchestrator | 2025-05-06 00:36:32.237401 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-06 00:36:32.240096 | orchestrator | Tuesday 06 May 2025 00:36:32 +0000 (0:00:00.652) 0:07:10.598 *********** 2025-05-06 00:36:32.369718 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:32.431399 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:32.503581 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:32.570378 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:32.635003 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:32.738411 | orchestrator | ok: [testbed-node-1] 2025-05-06 
00:36:32.738741 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:32.739244 | orchestrator | 2025-05-06 00:36:32.739984 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-06 00:36:32.740702 | orchestrator | Tuesday 06 May 2025 00:36:32 +0000 (0:00:00.504) 0:07:11.102 *********** 2025-05-06 00:36:38.407606 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:38.407868 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:38.408318 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:38.409044 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:38.409575 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:38.410000 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:38.410556 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:38.414185 | orchestrator | 2025-05-06 00:36:38.543063 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-06 00:36:38.543185 | orchestrator | Tuesday 06 May 2025 00:36:38 +0000 (0:00:05.666) 0:07:16.769 *********** 2025-05-06 00:36:38.543219 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:36:38.605198 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:36:38.668354 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:36:38.746506 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:36:38.808930 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:36:38.916400 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:36:38.916626 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:36:38.917164 | orchestrator | 2025-05-06 00:36:38.917428 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-06 00:36:38.918206 | orchestrator | Tuesday 06 May 2025 00:36:38 +0000 (0:00:00.509) 0:07:17.279 *********** 2025-05-06 00:36:39.837057 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:36:39.837595 | orchestrator | 2025-05-06 00:36:39.840341 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-06 00:36:41.566792 | orchestrator | Tuesday 06 May 2025 00:36:39 +0000 (0:00:00.918) 0:07:18.198 *********** 2025-05-06 00:36:41.566971 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:41.567057 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:41.567527 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:41.569962 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:41.571870 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:41.573973 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:41.575119 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:41.575179 | orchestrator | 2025-05-06 00:36:41.575238 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-06 00:36:41.575727 | orchestrator | Tuesday 06 May 2025 00:36:41 +0000 (0:00:01.729) 0:07:19.927 *********** 2025-05-06 00:36:42.687623 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:42.689225 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:42.689277 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:42.689705 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:42.691806 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:42.691965 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:42.692638 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:42.693506 | orchestrator | 2025-05-06 00:36:42.693942 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-06 00:36:42.694697 | orchestrator | Tuesday 06 May 2025 00:36:42 +0000 (0:00:01.121) 0:07:21.049 *********** 
2025-05-06 00:36:43.486390 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:43.486795 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:43.487681 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:43.488298 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:43.489754 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:43.490062 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:43.490096 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:43.490494 | orchestrator | 2025-05-06 00:36:43.491227 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-06 00:36:43.491560 | orchestrator | Tuesday 06 May 2025 00:36:43 +0000 (0:00:00.799) 0:07:21.848 *********** 2025-05-06 00:36:45.421821 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-06 00:36:45.422352 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-06 00:36:45.423229 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-06 00:36:45.423854 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-06 00:36:45.427738 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-06 00:36:45.427831 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-06 00:36:45.428434 | orchestrator | changed: [testbed-node-2] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-06 00:36:45.429259 | orchestrator | 2025-05-06 00:36:45.429955 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-06 00:36:45.430356 | orchestrator | Tuesday 06 May 2025 00:36:45 +0000 (0:00:01.935) 0:07:23.783 *********** 2025-05-06 00:36:46.176577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:36:46.177330 | orchestrator | 2025-05-06 00:36:46.178267 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-06 00:36:46.179600 | orchestrator | Tuesday 06 May 2025 00:36:46 +0000 (0:00:00.755) 0:07:24.539 *********** 2025-05-06 00:36:55.123430 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:36:55.124971 | orchestrator | changed: [testbed-manager] 2025-05-06 00:36:55.125313 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:36:55.129599 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:36:55.129839 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:36:55.129864 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:36:55.129879 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:36:55.129898 | orchestrator | 2025-05-06 00:36:55.130738 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-06 00:36:55.131726 | orchestrator | Tuesday 06 May 2025 00:36:55 +0000 (0:00:08.944) 0:07:33.483 *********** 2025-05-06 00:36:56.793224 | orchestrator | ok: [testbed-manager] 2025-05-06 00:36:56.793969 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:56.797551 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:56.798333 | orchestrator | ok: 
[testbed-node-5] 2025-05-06 00:36:56.798376 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:56.798401 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:56.798418 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:56.798434 | orchestrator | 2025-05-06 00:36:56.798482 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-06 00:36:56.798507 | orchestrator | Tuesday 06 May 2025 00:36:56 +0000 (0:00:01.670) 0:07:35.154 *********** 2025-05-06 00:36:58.075289 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:36:58.075559 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:36:58.076523 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:36:58.077413 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:36:58.077991 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:36:58.078670 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:36:58.079603 | orchestrator | 2025-05-06 00:36:58.080123 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-06 00:36:58.080836 | orchestrator | Tuesday 06 May 2025 00:36:58 +0000 (0:00:01.281) 0:07:36.435 *********** 2025-05-06 00:36:59.499650 | orchestrator | changed: [testbed-manager] 2025-05-06 00:36:59.500573 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:36:59.504074 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:36:59.504823 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:36:59.505741 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:36:59.506197 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:36:59.507031 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:36:59.508113 | orchestrator | 2025-05-06 00:36:59.509295 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-06 00:36:59.509992 | orchestrator | 2025-05-06 00:36:59.510069 | orchestrator | TASK [Include hardening role] 
************************************************** 2025-05-06 00:36:59.512144 | orchestrator | Tuesday 06 May 2025 00:36:59 +0000 (0:00:01.427) 0:07:37.863 *********** 2025-05-06 00:36:59.646100 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:36:59.703612 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:36:59.758522 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:36:59.821893 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:36:59.878345 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:36:59.987084 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:36:59.988173 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:36:59.988643 | orchestrator | 2025-05-06 00:36:59.989760 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-06 00:36:59.992432 | orchestrator | 2025-05-06 00:36:59.993124 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-06 00:36:59.993776 | orchestrator | Tuesday 06 May 2025 00:36:59 +0000 (0:00:00.486) 0:07:38.349 *********** 2025-05-06 00:37:01.306497 | orchestrator | changed: [testbed-manager] 2025-05-06 00:37:01.306697 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:37:01.308148 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:37:01.308781 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:37:01.309348 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:37:01.309870 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:37:01.310292 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:37:01.311008 | orchestrator | 2025-05-06 00:37:01.311271 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-06 00:37:01.313977 | orchestrator | Tuesday 06 May 2025 00:37:01 +0000 (0:00:01.320) 0:07:39.670 *********** 2025-05-06 00:37:02.781168 | orchestrator | ok: [testbed-manager] 2025-05-06 00:37:02.782591 | 
orchestrator | ok: [testbed-node-3] 2025-05-06 00:37:02.783676 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:37:02.783950 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:37:02.785594 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:37:02.786603 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:37:02.788358 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:37:02.789521 | orchestrator | 2025-05-06 00:37:02.790704 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-06 00:37:02.791667 | orchestrator | Tuesday 06 May 2025 00:37:02 +0000 (0:00:01.472) 0:07:41.142 *********** 2025-05-06 00:37:02.900701 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:37:03.132493 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:37:03.209045 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:37:03.274544 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:37:03.352404 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:37:03.757424 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:37:03.758425 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:37:03.758796 | orchestrator | 2025-05-06 00:37:03.760050 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-06 00:37:03.763618 | orchestrator | Tuesday 06 May 2025 00:37:03 +0000 (0:00:00.977) 0:07:42.120 *********** 2025-05-06 00:37:05.024181 | orchestrator | changed: [testbed-manager] 2025-05-06 00:37:05.025095 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:37:05.026120 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:37:05.026196 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:37:05.028280 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:37:05.028355 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:37:05.029041 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:37:05.030094 | orchestrator | 2025-05-06 00:37:05.030719 | 
orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-06 00:37:05.031199 | orchestrator | 2025-05-06 00:37:05.031968 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-06 00:37:05.032331 | orchestrator | Tuesday 06 May 2025 00:37:05 +0000 (0:00:01.265) 0:07:43.385 *********** 2025-05-06 00:37:05.808265 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:37:05.809124 | orchestrator | 2025-05-06 00:37:05.809391 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-06 00:37:05.809809 | orchestrator | Tuesday 06 May 2025 00:37:05 +0000 (0:00:00.784) 0:07:44.170 *********** 2025-05-06 00:37:06.268576 | orchestrator | ok: [testbed-manager] 2025-05-06 00:37:06.838685 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:37:06.839105 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:37:06.839889 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:37:06.840576 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:37:06.841558 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:37:06.842501 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:37:06.842662 | orchestrator | 2025-05-06 00:37:06.843436 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-06 00:37:06.843873 | orchestrator | Tuesday 06 May 2025 00:37:06 +0000 (0:00:01.032) 0:07:45.202 *********** 2025-05-06 00:37:07.934646 | orchestrator | changed: [testbed-manager] 2025-05-06 00:37:07.934876 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:37:07.936001 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:37:07.937022 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:37:07.937622 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:37:07.938627 | orchestrator | 
changed: [testbed-node-1] 2025-05-06 00:37:07.939030 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:37:07.940568 | orchestrator | 2025-05-06 00:37:07.941591 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-06 00:37:07.942430 | orchestrator | Tuesday 06 May 2025 00:37:07 +0000 (0:00:01.093) 0:07:46.295 *********** 2025-05-06 00:37:08.889436 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:37:08.890429 | orchestrator | 2025-05-06 00:37:08.890809 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-06 00:37:08.891698 | orchestrator | Tuesday 06 May 2025 00:37:08 +0000 (0:00:00.954) 0:07:47.250 *********** 2025-05-06 00:37:09.295581 | orchestrator | ok: [testbed-manager] 2025-05-06 00:37:09.731934 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:37:09.732616 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:37:09.732765 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:37:09.733376 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:37:09.733634 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:37:09.734332 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:37:09.734794 | orchestrator | 2025-05-06 00:37:09.736026 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-06 00:37:09.736412 | orchestrator | Tuesday 06 May 2025 00:37:09 +0000 (0:00:00.843) 0:07:48.094 *********** 2025-05-06 00:37:10.810242 | orchestrator | changed: [testbed-manager] 2025-05-06 00:37:10.810423 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:37:10.811476 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:37:10.812432 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:37:10.812796 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:37:10.813723 | orchestrator | 
changed: [testbed-node-1]
2025-05-06 00:37:10.814248 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:37:10.815021 | orchestrator |
2025-05-06 00:37:10.815831 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:37:10.816477 | orchestrator | 2025-05-06 00:37:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-06 00:37:10.816552 | orchestrator | 2025-05-06 00:37:10 | INFO  | Please wait and do not abort execution.
2025-05-06 00:37:10.817676 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-06 00:37:10.818289 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-06 00:37:10.818621 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-06 00:37:10.819432 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-06 00:37:10.820864 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-06 00:37:10.821102 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-06 00:37:10.821708 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-06 00:37:10.822222 | orchestrator |
2025-05-06 00:37:10.823316 | orchestrator | Tuesday 06 May 2025 00:37:10 +0000 (0:00:01.078) 0:07:49.173 ***********
2025-05-06 00:37:10.823632 | orchestrator | ===============================================================================
2025-05-06 00:37:10.824547 | orchestrator | osism.commons.packages : Install required packages --------------------- 83.43s
2025-05-06 00:37:10.824916 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.63s
2025-05-06 00:37:10.825579 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.37s
2025-05-06 00:37:10.826263 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.50s
2025-05-06 00:37:10.826983 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.09s
2025-05-06 00:37:10.827675 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 11.97s
2025-05-06 00:37:10.828338 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.78s
2025-05-06 00:37:10.828946 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.99s
2025-05-06 00:37:10.829401 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.32s
2025-05-06 00:37:10.829825 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.94s
2025-05-06 00:37:10.830386 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.50s
2025-05-06 00:37:10.831390 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.07s
2025-05-06 00:37:10.832347 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.00s
2025-05-06 00:37:10.832985 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.68s
2025-05-06 00:37:10.833789 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.32s
2025-05-06 00:37:10.834469 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.60s
2025-05-06 00:37:10.835500 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.84s
2025-05-06 00:37:10.837437 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.75s
2025-05-06 00:37:10.838127 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.74s
2025-05-06 00:37:10.838535 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.67s
2025-05-06 00:37:11.345647 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-06 00:37:13.126013 | orchestrator | + osism apply network
2025-05-06 00:37:13.126208 | orchestrator | 2025-05-06 00:37:13 | INFO  | Task e455043b-7f6e-4de7-b487-64aff64faefc (network) was prepared for execution.
2025-05-06 00:37:16.315919 | orchestrator | 2025-05-06 00:37:13 | INFO  | It takes a moment until task e455043b-7f6e-4de7-b487-64aff64faefc (network) has been started and output is visible here.
2025-05-06 00:37:16.316060 | orchestrator |
2025-05-06 00:37:16.316141 | orchestrator | PLAY [Apply role network] ******************************************************
2025-05-06 00:37:16.316551 | orchestrator |
2025-05-06 00:37:16.316796 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-05-06 00:37:16.316829 | orchestrator | Tuesday 06 May 2025 00:37:16 +0000 (0:00:00.193) 0:00:00.193 ***********
2025-05-06 00:37:16.474207 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:16.549987 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:37:16.624940 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:37:16.699367 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:37:16.775655 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:37:16.987993 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:37:16.988686 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:37:16.990179 | orchestrator |
2025-05-06 00:37:16.990608 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-05-06 00:37:16.991736 | orchestrator | Tuesday 06 May 2025 00:37:16 +0000 (0:00:00.671) 0:00:00.865 ***********
2025-05-06 00:37:18.148699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:37:18.149243 | orchestrator |
2025-05-06 00:37:18.149697 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-05-06 00:37:18.149729 | orchestrator | Tuesday 06 May 2025 00:37:18 +0000 (0:00:01.159) 0:00:02.024 ***********
2025-05-06 00:37:20.006652 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:20.007637 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:37:20.007718 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:37:20.009208 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:37:20.009342 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:37:20.014007 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:37:20.014163 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:37:20.014181 | orchestrator |
2025-05-06 00:37:20.014199 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-05-06 00:37:20.014651 | orchestrator | Tuesday 06 May 2025 00:37:19 +0000 (0:00:01.858) 0:00:03.883 ***********
2025-05-06 00:37:21.762691 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:21.765189 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:37:21.765741 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:37:21.765775 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:37:21.769037 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:37:21.770273 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:37:21.770314 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:37:21.770346 | orchestrator |
2025-05-06 00:37:21.772077 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-05-06 00:37:21.772429 | orchestrator | Tuesday 06 May 2025 00:37:21 +0000 (0:00:01.754) 0:00:05.638 ***********
2025-05-06 00:37:22.246692 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-05-06 00:37:22.826750 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-05-06 00:37:22.826898 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-05-06 00:37:22.828185 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-05-06 00:37:22.830659 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-05-06 00:37:22.830772 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-05-06 00:37:22.831412 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-05-06 00:37:22.832522 | orchestrator |
2025-05-06 00:37:22.834521 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-05-06 00:37:22.835281 | orchestrator | Tuesday 06 May 2025 00:37:22 +0000 (0:00:01.066) 0:00:06.704 ***********
2025-05-06 00:37:24.560244 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-06 00:37:24.561232 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-06 00:37:24.562579 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-06 00:37:24.563878 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-06 00:37:24.565030 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-06 00:37:24.566477 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-06 00:37:24.567208 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-06 00:37:24.567902 | orchestrator |
2025-05-06 00:37:24.568991 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-05-06 00:37:24.569540 | orchestrator | Tuesday 06 May 2025 00:37:24 +0000 (0:00:01.735) 0:00:08.440 ***********
2025-05-06 00:37:26.158315 | orchestrator | changed: [testbed-manager]
2025-05-06 00:37:26.158941 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:37:26.162293 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:37:26.162405 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:37:26.162425 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:37:26.162440 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:37:26.162509 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:37:26.164131 | orchestrator |
2025-05-06 00:37:26.165075 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-05-06 00:37:26.165923 | orchestrator | Tuesday 06 May 2025 00:37:26 +0000 (0:00:01.592) 0:00:10.033 ***********
2025-05-06 00:37:26.791022 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-06 00:37:27.204046 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-06 00:37:27.204214 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-06 00:37:27.204651 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-06 00:37:27.206010 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-06 00:37:27.207005 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-06 00:37:27.207152 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-06 00:37:27.209230 | orchestrator |
2025-05-06 00:37:27.210344 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-05-06 00:37:27.211445 | orchestrator | Tuesday 06 May 2025 00:37:27 +0000 (0:00:01.052) 0:00:11.085 ***********
2025-05-06 00:37:27.617548 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:27.696559 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:37:28.290606 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:37:28.292084 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:37:28.293031 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:37:28.294194 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:37:28.295319 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:37:28.296066 | orchestrator |
2025-05-06 00:37:28.296804 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-05-06 00:37:28.297717 | orchestrator | Tuesday 06 May 2025 00:37:28 +0000 (0:00:01.081) 0:00:12.166 ***********
2025-05-06 00:37:28.447439 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:37:28.523584 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:37:28.601383 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:37:28.672906 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:37:28.750133 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:37:29.005404 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:37:29.006007 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:37:29.007037 | orchestrator |
2025-05-06 00:37:29.009184 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-05-06 00:37:29.010159 | orchestrator | Tuesday 06 May 2025 00:37:28 +0000 (0:00:00.716) 0:00:12.882 ***********
2025-05-06 00:37:31.056079 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:31.056250 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:37:31.056279 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:37:31.056948 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:37:31.056980 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:37:31.057251 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:37:31.057695 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:37:31.057911 | orchestrator |
2025-05-06 00:37:31.058314 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-05-06 00:37:31.058781 | orchestrator | Tuesday 06 May 2025 00:37:31 +0000 (0:00:02.052) 0:00:14.935 ***********
2025-05-06 00:37:32.824141 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-05-06 00:37:32.826145 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-06 00:37:32.847739 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-06 00:37:34.296805 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-06 00:37:34.296928 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-06 00:37:34.296949 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-06 00:37:34.296965 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-06 00:37:34.296980 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'})
2025-05-06 00:37:34.296995 | orchestrator |
2025-05-06 00:37:34.297011 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-05-06 00:37:34.297027 | orchestrator | Tuesday 06 May 2025 00:37:32 +0000 (0:00:01.761) 0:00:16.696 ***********
2025-05-06 00:37:34.297057 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:34.297164 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:37:34.298423 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:37:34.299152 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:37:34.301056 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:37:34.301385 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:37:34.302008 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:37:34.302445 | orchestrator |
2025-05-06 00:37:34.303300 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-05-06 00:37:34.304034 | orchestrator | Tuesday 06 May 2025 00:37:34 +0000 (0:00:01.479) 0:00:18.176 ***********
2025-05-06 00:37:35.700122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:37:35.701685 | orchestrator |
2025-05-06 00:37:35.701726 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-05-06 00:37:35.703203 | orchestrator | Tuesday 06 May 2025 00:37:35 +0000 (0:00:01.391) 0:00:19.567 ***********
2025-05-06 00:37:36.269046 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:36.686524 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:37:36.690303 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:37:36.690523 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:37:36.690566 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:37:36.690599 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:37:36.691552 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:37:36.692366 | orchestrator |
2025-05-06 00:37:36.693110 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-05-06 00:37:36.695015 | orchestrator | Tuesday 06 May 2025 00:37:36 +0000 (0:00:00.997) 0:00:20.565 ***********
2025-05-06 00:37:36.837700 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:36.920001 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:37:37.136023 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:37:37.218582 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:37:37.298664 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:37:37.476527 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:37:37.476976 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:37:37.477739 | orchestrator |
2025-05-06 00:37:37.478196 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-05-06 00:37:37.481270 | orchestrator | Tuesday 06 May 2025 00:37:37 +0000 (0:00:00.788) 0:00:21.354 ***********
2025-05-06 00:37:37.857017 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-06 00:37:37.857220 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-05-06 00:37:38.021339 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-06 00:37:38.021580 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-05-06 00:37:38.465535 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-06 00:37:38.466085 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-05-06 00:37:38.466659 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-06 00:37:38.467339 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-05-06 00:37:38.468321 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-06 00:37:38.471091 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-05-06 00:37:38.471207 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-06 00:37:38.471250 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-05-06 00:37:38.471976 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-06 00:37:38.472808 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-05-06 00:37:38.473446 | orchestrator |
2025-05-06 00:37:38.474638 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-05-06 00:37:38.475258 | orchestrator | Tuesday 06 May 2025 00:37:38 +0000 (0:00:00.991) 0:00:22.345 ***********
2025-05-06 00:37:38.771065 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:37:38.852384 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:37:38.932116 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:37:39.010131 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:37:39.088976 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:37:40.202777 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:37:40.203261 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:37:40.203301 | orchestrator |
2025-05-06 00:37:40.203954 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-05-06 00:37:40.204397 | orchestrator | Tuesday 06 May 2025 00:37:40 +0000 (0:00:01.733) 0:00:24.079 ***********
2025-05-06 00:37:40.366418 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:37:40.448742 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:37:40.694659 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:37:40.782991 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:37:40.862627 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:37:40.901523 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:37:40.901679 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:37:40.902763 | orchestrator |
2025-05-06 00:37:40.903268 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:37:40.903571 | orchestrator | 2025-05-06 00:37:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-06 00:37:40.903647 | orchestrator | 2025-05-06 00:37:40 | INFO  | Please wait and do not abort execution.
2025-05-06 00:37:40.904661 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:37:40.905315 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:37:40.905953 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:37:40.906296 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:37:40.906693 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:37:40.907284 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:37:40.907915 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:37:40.909355 | orchestrator |
2025-05-06 00:37:40.909444 | orchestrator | Tuesday 06 May 2025 00:37:40 +0000 (0:00:00.702) 0:00:24.782 ***********
2025-05-06 00:37:40.909496 | orchestrator | ===============================================================================
2025-05-06 00:37:40.909717 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.05s
2025-05-06 00:37:40.909944 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.86s
2025-05-06 00:37:40.910093 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.76s
2025-05-06 00:37:40.910795 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.76s
2025-05-06 00:37:40.911184 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.74s
2025-05-06 00:37:40.911547 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.73s
2025-05-06 00:37:40.912042 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s
2025-05-06 00:37:40.912422 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.48s
2025-05-06 00:37:40.913132 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.39s
2025-05-06 00:37:40.913307 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.16s
2025-05-06 00:37:40.913670 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.08s
2025-05-06 00:37:40.913987 | orchestrator | osism.commons.network : Create required directories --------------------- 1.07s
2025-05-06 00:37:40.914309 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.05s
2025-05-06 00:37:40.914617 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s
2025-05-06 00:37:40.914982 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 0.99s
2025-05-06 00:37:40.915296 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.79s
2025-05-06 00:37:40.915633 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.72s
2025-05-06 00:37:40.915885 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.70s
2025-05-06 00:37:40.916177 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.67s
2025-05-06 00:37:41.367520 | orchestrator | + osism apply wireguard
2025-05-06 00:37:42.739166 | orchestrator | 2025-05-06 00:37:42 | INFO  | Task accf68a0-9734-49c1-b653-8f22f07133a1 (wireguard) was prepared for execution.
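The network play above renders a netplan file (the log shows `/etc/netplan/01-osism.yaml` being kept and the cloud-init default `50-cloud-init.yaml` removed). As a rough sketch only, not the role's actual template, such a netplan file might look like this (interface names and addresses are hypothetical placeholders, not values from this deployment):

```yaml
# Hypothetical sketch of a netplan file like the 01-osism.yaml deployed above;
# the real template and values come from the osism.commons.network role.
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true          # provider/external interface
    eth1:
      addresses:
        - 192.168.16.10/20 # example management address, placeholder only
```

Note that the "Netplan configuration changed" handler is skipped in this run; the configuration is applied later by the explicit "Apply netplan configuration" tasks in the workarounds play.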
2025-05-06 00:37:45.791533 | orchestrator | 2025-05-06 00:37:42 | INFO  | It takes a moment until task accf68a0-9734-49c1-b653-8f22f07133a1 (wireguard) has been started and output is visible here.
2025-05-06 00:37:45.791686 | orchestrator |
2025-05-06 00:37:45.792011 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-05-06 00:37:45.793070 | orchestrator |
2025-05-06 00:37:45.797101 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-05-06 00:37:45.797516 | orchestrator | Tuesday 06 May 2025 00:37:45 +0000 (0:00:00.169) 0:00:00.169 ***********
2025-05-06 00:37:47.227823 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:47.228202 | orchestrator |
2025-05-06 00:37:47.228233 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-05-06 00:37:47.228514 | orchestrator | Tuesday 06 May 2025 00:37:47 +0000 (0:00:01.437) 0:00:01.607 ***********
2025-05-06 00:37:53.246927 | orchestrator | changed: [testbed-manager]
2025-05-06 00:37:53.247752 | orchestrator |
2025-05-06 00:37:53.248023 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-05-06 00:37:53.248852 | orchestrator | Tuesday 06 May 2025 00:37:53 +0000 (0:00:06.017) 0:00:07.625 ***********
2025-05-06 00:37:53.796228 | orchestrator | changed: [testbed-manager]
2025-05-06 00:37:53.797218 | orchestrator |
2025-05-06 00:37:53.798096 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-05-06 00:37:53.798932 | orchestrator | Tuesday 06 May 2025 00:37:53 +0000 (0:00:00.548) 0:00:08.174 ***********
2025-05-06 00:37:54.212922 | orchestrator | changed: [testbed-manager]
2025-05-06 00:37:54.213373 | orchestrator |
2025-05-06 00:37:54.213898 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-05-06 00:37:54.214728 | orchestrator | Tuesday 06 May 2025 00:37:54 +0000 (0:00:00.418) 0:00:08.593 ***********
2025-05-06 00:37:54.717293 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:54.717857 | orchestrator |
2025-05-06 00:37:54.717912 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-05-06 00:37:54.718514 | orchestrator | Tuesday 06 May 2025 00:37:54 +0000 (0:00:00.501) 0:00:09.094 ***********
2025-05-06 00:37:55.218303 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:55.218910 | orchestrator |
2025-05-06 00:37:55.259266 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-05-06 00:37:55.635541 | orchestrator | Tuesday 06 May 2025 00:37:55 +0000 (0:00:00.504) 0:00:09.598 ***********
2025-05-06 00:37:55.635679 | orchestrator | ok: [testbed-manager]
2025-05-06 00:37:55.636710 | orchestrator |
2025-05-06 00:37:55.638090 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-05-06 00:37:55.638821 | orchestrator | Tuesday 06 May 2025 00:37:55 +0000 (0:00:00.411) 0:00:10.010 ***********
2025-05-06 00:37:56.766271 | orchestrator | changed: [testbed-manager]
2025-05-06 00:37:56.766443 | orchestrator |
2025-05-06 00:37:56.767828 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-05-06 00:37:56.769023 | orchestrator | Tuesday 06 May 2025 00:37:56 +0000 (0:00:01.134) 0:00:11.144 ***********
2025-05-06 00:37:57.642765 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-06 00:37:57.643382 | orchestrator | changed: [testbed-manager]
2025-05-06 00:37:57.644634 | orchestrator |
2025-05-06 00:37:57.644803 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-05-06 00:37:57.645250 | orchestrator | Tuesday 06 May 2025 00:37:57 +0000 (0:00:00.874) 0:00:12.019 ***********
2025-05-06 00:37:59.345379 | orchestrator | changed: [testbed-manager]
2025-05-06 00:37:59.346571 | orchestrator |
2025-05-06 00:37:59.347176 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-05-06 00:37:59.348219 | orchestrator | Tuesday 06 May 2025 00:37:59 +0000 (0:00:01.704) 0:00:13.723 ***********
2025-05-06 00:38:00.263035 | orchestrator | changed: [testbed-manager]
2025-05-06 00:38:00.263322 | orchestrator |
2025-05-06 00:38:00.264122 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:38:00.264417 | orchestrator | 2025-05-06 00:38:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-06 00:38:00.264682 | orchestrator | 2025-05-06 00:38:00 | INFO  | Please wait and do not abort execution.
2025-05-06 00:38:00.265836 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:38:00.266695 | orchestrator |
2025-05-06 00:38:00.267314 | orchestrator | Tuesday 06 May 2025 00:38:00 +0000 (0:00:00.917) 0:00:14.641 ***********
2025-05-06 00:38:00.267889 | orchestrator | ===============================================================================
2025-05-06 00:38:00.268439 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.02s
2025-05-06 00:38:00.269119 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.70s
2025-05-06 00:38:00.269559 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.44s
2025-05-06 00:38:00.270309 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.13s
2025-05-06 00:38:00.270746 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s
2025-05-06 00:38:00.271158 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.87s
2025-05-06 00:38:00.271552 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2025-05-06 00:38:00.271996 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.50s
2025-05-06 00:38:00.272325 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.50s
2025-05-06 00:38:00.272840 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s
2025-05-06 00:38:00.273912 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2025-05-06 00:38:00.724490 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-05-06 00:38:00.762653 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-05-06 00:38:00.860119 | orchestrator | Dload Upload Total Spent Left Speed
2025-05-06 00:38:00.860254 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 143 0 --:--:-- --:--:-- --:--:-- 144
2025-05-06 00:38:00.875337 | orchestrator | + osism apply --environment custom workarounds
2025-05-06 00:38:02.209871 | orchestrator | 2025-05-06 00:38:02 | INFO  | Trying to run play workarounds in environment custom
2025-05-06 00:38:02.257199 | orchestrator | 2025-05-06 00:38:02 | INFO  | Task b846aa3c-c12b-4e80-84f9-9af66ea41cbe (workarounds) was prepared for execution.
2025-05-06 00:38:05.281031 | orchestrator | 2025-05-06 00:38:02 | INFO  | It takes a moment until task b846aa3c-c12b-4e80-84f9-9af66ea41cbe (workarounds) has been started and output is visible here.
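The wireguard play above generates a server key pair and a preshared key on the manager, then templates them into `wg0.conf` and matching client configuration files. The resulting server config would have the usual wg-quick shape, roughly like this (a sketch only; keys, addresses, and port are placeholders, not values from this deployment):

```ini
; Hypothetical sketch of a server wg0.conf like the one copied above;
; the osism.services.wireguard role fills in the real keys and addresses.
[Interface]
Address = 192.168.48.1/24        ; placeholder tunnel address
ListenPort = 51820               ; placeholder port
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32     ; placeholder client tunnel address
```

The "Manage wg-quick@wg0.service service" task and the "Restart wg0 service" handler then bring the tunnel up via systemd's `wg-quick@` template unit.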
2025-05-06 00:38:05.281202 | orchestrator |
2025-05-06 00:38:05.282012 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 00:38:05.285247 | orchestrator |
2025-05-06 00:38:05.442442 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-05-06 00:38:05.442641 | orchestrator | Tuesday 06 May 2025 00:38:05 +0000 (0:00:00.135) 0:00:00.135 ***********
2025-05-06 00:38:05.442677 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-05-06 00:38:05.522564 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-05-06 00:38:05.611820 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-05-06 00:38:05.693057 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-05-06 00:38:05.787554 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-05-06 00:38:06.057113 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-05-06 00:38:06.057751 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-05-06 00:38:06.058437 | orchestrator |
2025-05-06 00:38:06.059543 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-05-06 00:38:06.060235 | orchestrator |
2025-05-06 00:38:06.060561 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-06 00:38:06.061421 | orchestrator | Tuesday 06 May 2025 00:38:06 +0000 (0:00:00.778) 0:00:00.913 ***********
2025-05-06 00:38:08.581721 | orchestrator | ok: [testbed-manager]
2025-05-06 00:38:08.582483 | orchestrator |
2025-05-06 00:38:08.585120 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-05-06 00:38:08.586443 | orchestrator |
2025-05-06 00:38:08.586520 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-06 00:38:08.587650 | orchestrator | Tuesday 06 May 2025 00:38:08 +0000 (0:00:02.519) 0:00:03.432 ***********
2025-05-06 00:38:10.378267 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:38:10.379564 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:38:10.379640 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:38:10.382558 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:38:10.383177 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:38:10.383206 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:38:10.383225 | orchestrator |
2025-05-06 00:38:10.383925 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-05-06 00:38:10.384789 | orchestrator |
2025-05-06 00:38:10.385569 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-05-06 00:38:10.386475 | orchestrator | Tuesday 06 May 2025 00:38:10 +0000 (0:00:01.800) 0:00:05.233 ***********
2025-05-06 00:38:11.839988 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-06 00:38:11.841092 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-06 00:38:11.842446 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-06 00:38:11.842619 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-06 00:38:11.843960 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-06 00:38:11.844348 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-06 00:38:11.844851 | orchestrator |
2025-05-06 00:38:11.845369 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-05-06 00:38:11.845887 | orchestrator | Tuesday 06 May 2025 00:38:11 +0000 (0:00:01.459) 0:00:06.692 ***********
2025-05-06 00:38:15.428283 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:38:15.428449 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:38:15.431096 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:38:15.433477 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:38:15.434362 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:38:15.434485 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:38:15.435426 | orchestrator |
2025-05-06 00:38:15.436545 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-05-06 00:38:15.437205 | orchestrator | Tuesday 06 May 2025 00:38:15 +0000 (0:00:03.590) 0:00:10.283 ***********
2025-05-06 00:38:15.570732 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:38:15.647567 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:38:15.726648 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:38:15.947737 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:38:16.085839 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:38:16.086560 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:38:16.086606 | orchestrator |
2025-05-06 00:38:16.087206 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-05-06 00:38:16.090961 | orchestrator |
2025-05-06 00:38:16.091492 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-05-06 00:38:16.091930 | orchestrator | Tuesday 06 May 2025 00:38:16 +0000 (0:00:00.655) 0:00:10.938 ***********
2025-05-06 00:38:17.766771 | orchestrator | changed: [testbed-manager]
2025-05-06 00:38:17.767065 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:38:17.768283 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:38:17.769003 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:38:17.770146 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:38:17.771300 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:38:17.772131 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:38:17.772576 | orchestrator |
2025-05-06 00:38:17.773577 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-05-06 00:38:17.774853 | orchestrator | Tuesday 06 May 2025 00:38:17 +0000 (0:00:01.682) 0:00:12.620 ***********
2025-05-06 00:38:19.381776 | orchestrator | changed: [testbed-manager]
2025-05-06 00:38:19.382846 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:38:19.383154 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:38:19.383797 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:38:19.384229 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:38:19.384536 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:38:19.385550 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:38:19.386124 | orchestrator |
2025-05-06 00:38:19.386200 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-05-06 00:38:19.386262 | orchestrator | Tuesday 06 May 2025 00:38:19 +0000 (0:00:01.612) 0:00:14.233 ***********
2025-05-06 00:38:20.843034 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:38:20.844847 | orchestrator | ok: [testbed-manager]
2025-05-06 00:38:20.846499 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:38:20.848984 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:38:20.849828 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:38:20.849862 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:38:20.850082 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:38:20.850756 | orchestrator |
2025-05-06 00:38:20.851443 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-05-06 00:38:20.852125 | orchestrator
| Tuesday 06 May 2025 00:38:20 +0000 (0:00:01.461) 0:00:15.695 *********** 2025-05-06 00:38:22.576976 | orchestrator | changed: [testbed-manager] 2025-05-06 00:38:22.577653 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:38:22.577701 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:38:22.578267 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:38:22.578889 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:38:22.579345 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:38:22.579963 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:38:22.581197 | orchestrator | 2025-05-06 00:38:22.587159 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-06 00:38:22.741984 | orchestrator | Tuesday 06 May 2025 00:38:22 +0000 (0:00:01.735) 0:00:17.430 *********** 2025-05-06 00:38:22.742175 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:38:22.816115 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:38:22.891163 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:38:22.961821 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:38:23.188124 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:38:23.323690 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:38:23.323909 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:38:23.325022 | orchestrator | 2025-05-06 00:38:23.328086 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-06 00:38:23.328998 | orchestrator | 2025-05-06 00:38:23.330072 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-06 00:38:23.330597 | orchestrator | Tuesday 06 May 2025 00:38:23 +0000 (0:00:00.746) 0:00:18.176 *********** 2025-05-06 00:38:25.703364 | orchestrator | ok: [testbed-manager] 2025-05-06 00:38:25.703639 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:38:25.703945 | orchestrator | ok: 
[testbed-node-0] 2025-05-06 00:38:25.704355 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:38:25.705779 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:38:25.706006 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:38:25.706603 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:38:25.708166 | orchestrator | 2025-05-06 00:38:25.709008 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:38:25.709059 | orchestrator | 2025-05-06 00:38:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:38:25.709243 | orchestrator | 2025-05-06 00:38:25 | INFO  | Please wait and do not abort execution. 2025-05-06 00:38:25.709491 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:38:25.710251 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:25.710784 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:25.711669 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:25.712315 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:25.712580 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:25.713251 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:25.713554 | orchestrator | 2025-05-06 00:38:25.713979 | orchestrator | Tuesday 06 May 2025 00:38:25 +0000 (0:00:02.381) 0:00:20.558 *********** 2025-05-06 00:38:25.714366 | orchestrator | =============================================================================== 2025-05-06 00:38:25.715066 | orchestrator | Run update-ca-certificates 
---------------------------------------------- 3.59s 2025-05-06 00:38:25.715289 | orchestrator | Apply netplan configuration --------------------------------------------- 2.52s 2025-05-06 00:38:25.715671 | orchestrator | Install python3-docker -------------------------------------------------- 2.38s 2025-05-06 00:38:25.716101 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-05-06 00:38:25.716417 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.74s 2025-05-06 00:38:25.716896 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.68s 2025-05-06 00:38:25.717366 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.61s 2025-05-06 00:38:25.717734 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.46s 2025-05-06 00:38:25.718095 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s 2025-05-06 00:38:25.718814 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.78s 2025-05-06 00:38:25.719288 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.75s 2025-05-06 00:38:25.719771 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.66s 2025-05-06 00:38:26.206566 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-06 00:38:27.570740 | orchestrator | 2025-05-06 00:38:27 | INFO  | Task da4fc7ac-f464-4e68-b5c6-50d87f6e1c2d (reboot) was prepared for execution. 2025-05-06 00:38:30.637513 | orchestrator | 2025-05-06 00:38:27 | INFO  | It takes a moment until task da4fc7ac-f464-4e68-b5c6-50d87f6e1c2d (reboot) has been started and output is visible here. 
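The host-preparation play above copies a custom CA certificate to every non-manager node and then branches by OS family: `Run update-ca-certificates` changed all Debian-family nodes, while `Run update-ca-trust` was skipped. A minimal shell sketch of that branch, assuming a hypothetical helper name (`ca_update_command` is not part of the testbed scripts):

```shell
# Maps a lowercase OS ID (as found in /etc/os-release) to the CA refresh
# command, mirroring the two tasks in the play: Debian-family hosts run
# update-ca-certificates, RedHat-family hosts run update-ca-trust.
ca_update_command() {
    local os_family="$1"
    case "$os_family" in
        debian|ubuntu)
            echo "update-ca-certificates" ;;
        rhel|centos|rocky|almalinux)
            echo "update-ca-trust extract" ;;
        *)
            echo "unsupported OS family: $os_family" >&2
            return 1 ;;
    esac
}

ca_update_command ubuntu   # the case taken in this run (Ubuntu 24.04 nodes)
```

In this run only the Debian branch fired, which matches the `skipping:` results on the `update-ca-trust` task for all six nodes.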
2025-05-06 00:38:30.637666 | orchestrator | 2025-05-06 00:38:30.637957 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-06 00:38:30.638008 | orchestrator | 2025-05-06 00:38:30.640361 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-06 00:38:30.641547 | orchestrator | Tuesday 06 May 2025 00:38:30 +0000 (0:00:00.141) 0:00:00.141 *********** 2025-05-06 00:38:30.727798 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:38:30.728195 | orchestrator | 2025-05-06 00:38:30.729246 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-06 00:38:30.729960 | orchestrator | Tuesday 06 May 2025 00:38:30 +0000 (0:00:00.093) 0:00:00.235 *********** 2025-05-06 00:38:31.630824 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:38:31.631203 | orchestrator | 2025-05-06 00:38:31.632024 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-06 00:38:31.632595 | orchestrator | Tuesday 06 May 2025 00:38:31 +0000 (0:00:00.903) 0:00:01.138 *********** 2025-05-06 00:38:31.759689 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:38:31.760084 | orchestrator | 2025-05-06 00:38:31.760122 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-06 00:38:31.760886 | orchestrator | 2025-05-06 00:38:31.761909 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-06 00:38:31.762666 | orchestrator | Tuesday 06 May 2025 00:38:31 +0000 (0:00:00.126) 0:00:01.265 *********** 2025-05-06 00:38:31.848739 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:38:31.849343 | orchestrator | 2025-05-06 00:38:31.849940 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-06 00:38:31.850406 | orchestrator | Tuesday 06 May 2025 
00:38:31 +0000 (0:00:00.091) 0:00:01.356 *********** 2025-05-06 00:38:32.541815 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:38:32.542084 | orchestrator | 2025-05-06 00:38:32.542620 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-06 00:38:32.543609 | orchestrator | Tuesday 06 May 2025 00:38:32 +0000 (0:00:00.691) 0:00:02.048 *********** 2025-05-06 00:38:32.643623 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:38:32.644203 | orchestrator | 2025-05-06 00:38:32.644642 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-06 00:38:32.644997 | orchestrator | 2025-05-06 00:38:32.645537 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-06 00:38:32.646139 | orchestrator | Tuesday 06 May 2025 00:38:32 +0000 (0:00:00.101) 0:00:02.149 *********** 2025-05-06 00:38:32.739543 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:38:32.739921 | orchestrator | 2025-05-06 00:38:32.740632 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-06 00:38:32.741112 | orchestrator | Tuesday 06 May 2025 00:38:32 +0000 (0:00:00.095) 0:00:02.245 *********** 2025-05-06 00:38:33.480938 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:38:33.481647 | orchestrator | 2025-05-06 00:38:33.482569 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-06 00:38:33.483363 | orchestrator | Tuesday 06 May 2025 00:38:33 +0000 (0:00:00.741) 0:00:02.987 *********** 2025-05-06 00:38:33.585671 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:38:33.586530 | orchestrator | 2025-05-06 00:38:33.587479 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-06 00:38:33.588750 | orchestrator | 2025-05-06 00:38:33.589698 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-05-06 00:38:33.590844 | orchestrator | Tuesday 06 May 2025 00:38:33 +0000 (0:00:00.104) 0:00:03.092 *********** 2025-05-06 00:38:33.685374 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:38:33.685661 | orchestrator | 2025-05-06 00:38:33.686351 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-06 00:38:33.687333 | orchestrator | Tuesday 06 May 2025 00:38:33 +0000 (0:00:00.099) 0:00:03.192 *********** 2025-05-06 00:38:34.372889 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:38:34.373252 | orchestrator | 2025-05-06 00:38:34.373309 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-06 00:38:34.373847 | orchestrator | Tuesday 06 May 2025 00:38:34 +0000 (0:00:00.687) 0:00:03.880 *********** 2025-05-06 00:38:34.481997 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:38:34.482170 | orchestrator | 2025-05-06 00:38:34.483620 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-06 00:38:34.485881 | orchestrator | 2025-05-06 00:38:34.486292 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-06 00:38:34.487079 | orchestrator | Tuesday 06 May 2025 00:38:34 +0000 (0:00:00.106) 0:00:03.987 *********** 2025-05-06 00:38:34.581923 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:38:34.583678 | orchestrator | 2025-05-06 00:38:34.584108 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-06 00:38:34.584173 | orchestrator | Tuesday 06 May 2025 00:38:34 +0000 (0:00:00.102) 0:00:04.089 *********** 2025-05-06 00:38:35.283957 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:38:35.284187 | orchestrator | 2025-05-06 00:38:35.285493 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-05-06 00:38:35.286299 | orchestrator | Tuesday 06 May 2025 00:38:35 +0000 (0:00:00.699) 0:00:04.789 *********** 2025-05-06 00:38:35.386286 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:38:35.387341 | orchestrator | 2025-05-06 00:38:35.389695 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-06 00:38:35.390333 | orchestrator | 2025-05-06 00:38:35.391846 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-06 00:38:35.474229 | orchestrator | Tuesday 06 May 2025 00:38:35 +0000 (0:00:00.102) 0:00:04.891 *********** 2025-05-06 00:38:35.474328 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:38:35.474831 | orchestrator | 2025-05-06 00:38:35.476100 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-06 00:38:35.477530 | orchestrator | Tuesday 06 May 2025 00:38:35 +0000 (0:00:00.089) 0:00:04.981 *********** 2025-05-06 00:38:36.111862 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:38:36.112721 | orchestrator | 2025-05-06 00:38:36.113860 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-06 00:38:36.114619 | orchestrator | Tuesday 06 May 2025 00:38:36 +0000 (0:00:00.636) 0:00:05.618 *********** 2025-05-06 00:38:36.141841 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:38:36.142253 | orchestrator | 2025-05-06 00:38:36.144261 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:38:36.144325 | orchestrator | 2025-05-06 00:38:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:38:36.148189 | orchestrator | 2025-05-06 00:38:36 | INFO  | Please wait and do not abort execution. 
2025-05-06 00:38:36.148262 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:36.149340 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:36.149879 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:36.150136 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:36.150536 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:36.151092 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:38:36.151894 | orchestrator | 2025-05-06 00:38:36.152374 | orchestrator | Tuesday 06 May 2025 00:38:36 +0000 (0:00:00.032) 0:00:05.650 *********** 2025-05-06 00:38:36.153155 | orchestrator | =============================================================================== 2025-05-06 00:38:36.153278 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.36s 2025-05-06 00:38:36.153947 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.57s 2025-05-06 00:38:36.154465 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.57s 2025-05-06 00:38:36.583809 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-06 00:38:37.988360 | orchestrator | 2025-05-06 00:38:37 | INFO  | Task ce92cb1d-475a-4c78-81c1-1e26d9a3a7be (wait-for-connection) was prepared for execution. 2025-05-06 00:38:41.000941 | orchestrator | 2025-05-06 00:38:37 | INFO  | It takes a moment until task ce92cb1d-475a-4c78-81c1-1e26d9a3a7be (wait-for-connection) has been started and output is visible here. 
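The reboot play above runs once per node and skips its first task only because `-e ireallymeanit=yes` was passed on the `osism apply reboot` command line; without it, the play exits before touching the host. A sketch of that confirmation guard, with the variable name taken from the log and the messages invented for illustration:

```shell
# confirm: value of the ireallymeanit extra variable from the command line.
# Anything other than "yes" aborts before the reboot task runs; with "yes",
# the real play issues the reboot and deliberately does not wait for the
# host to come back (a separate wait-for-connection play handles that).
maybe_reboot() {
    local confirm="$1"
    if [ "$confirm" != "yes" ]; then
        echo "skipping reboot: pass -e ireallymeanit=yes to confirm"
        return 0
    fi
    echo "rebooting now, not waiting for the host to return"
}
```

Splitting "trigger the reboot" from "wait for the host" keeps the reboot task fast and lets all nodes reboot in parallel before a single wait pass.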
2025-05-06 00:38:41.001086 | orchestrator | 2025-05-06 00:38:41.001160 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-06 00:38:41.002355 | orchestrator | 2025-05-06 00:38:41.005184 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-06 00:38:41.007727 | orchestrator | Tuesday 06 May 2025 00:38:40 +0000 (0:00:00.164) 0:00:00.164 *********** 2025-05-06 00:38:54.782636 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:38:54.782831 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:38:54.782855 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:38:54.782870 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:38:54.782885 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:38:54.782899 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:38:54.782916 | orchestrator | 2025-05-06 00:38:54.782943 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:38:54.782995 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:38:54.783014 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:38:54.783044 | orchestrator | 2025-05-06 00:38:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:38:54.783121 | orchestrator | 2025-05-06 00:38:54 | INFO  | Please wait and do not abort execution. 
2025-05-06 00:38:54.783145 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:38:54.784725 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:38:54.787371 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:38:54.787505 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:38:54.787525 | orchestrator | 2025-05-06 00:38:54.787540 | orchestrator | Tuesday 06 May 2025 00:38:54 +0000 (0:00:13.780) 0:00:13.945 *********** 2025-05-06 00:38:54.787555 | orchestrator | =============================================================================== 2025-05-06 00:38:54.787574 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.78s 2025-05-06 00:38:55.234844 | orchestrator | + osism apply hddtemp 2025-05-06 00:38:56.774457 | orchestrator | 2025-05-06 00:38:56 | INFO  | Task 5dfe6a05-b00f-43a8-b304-38d2d091e7ba (hddtemp) was prepared for execution. 2025-05-06 00:39:00.136888 | orchestrator | 2025-05-06 00:38:56 | INFO  | It takes a moment until task 5dfe6a05-b00f-43a8-b304-38d2d091e7ba (hddtemp) has been started and output is visible here. 
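The wait-for-connection play above blocked for about 13.8s until all six rebooted nodes answered again. The underlying pattern is a bounded retry loop around a cheap probe; a generic sketch under assumed defaults (the attempt budget and interval here are illustrative, not the play's actual values):

```shell
# probe: a command that exits 0 once the remote system is reachable,
# e.g. "ssh -o ConnectTimeout=5 testbed-node-0 true".
# attempts / interval: retry budget; defaults are illustrative.
wait_for_probe() {
    local probe="$1" attempts="${2:-120}" interval="${3:-5}" i=0
    until $probe; do
        i=$((i + 1))
        if [ "$i" -ge "$attempts" ]; then
            return 1
        fi
        sleep "$interval"
    done
}
```

Ansible's `wait_for_connection` module does the equivalent with its own transport probe, so the timing line in the recap directly measures how long the slowest node took to accept connections again.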
2025-05-06 00:39:00.137034 | orchestrator | 2025-05-06 00:39:00.138571 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-06 00:39:00.138781 | orchestrator | 2025-05-06 00:39:00.139414 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-06 00:39:00.139855 | orchestrator | Tuesday 06 May 2025 00:39:00 +0000 (0:00:00.192) 0:00:00.192 *********** 2025-05-06 00:39:00.278277 | orchestrator | ok: [testbed-manager] 2025-05-06 00:39:00.350237 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:39:00.421975 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:39:00.495891 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:39:00.567215 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:39:00.792802 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:39:00.793546 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:39:00.794534 | orchestrator | 2025-05-06 00:39:00.795124 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-06 00:39:00.798590 | orchestrator | Tuesday 06 May 2025 00:39:00 +0000 (0:00:00.659) 0:00:00.851 *********** 2025-05-06 00:39:01.915932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:39:01.916630 | orchestrator | 2025-05-06 00:39:01.917046 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-06 00:39:01.917543 | orchestrator | Tuesday 06 May 2025 00:39:01 +0000 (0:00:01.117) 0:00:01.969 *********** 2025-05-06 00:39:03.834089 | orchestrator | ok: [testbed-manager] 2025-05-06 00:39:03.835188 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:39:03.835395 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:39:03.836512 | 
orchestrator | ok: [testbed-node-2] 2025-05-06 00:39:03.838649 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:39:03.839221 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:39:03.839233 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:39:03.839242 | orchestrator | 2025-05-06 00:39:03.840051 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-06 00:39:03.840839 | orchestrator | Tuesday 06 May 2025 00:39:03 +0000 (0:00:01.923) 0:00:03.892 *********** 2025-05-06 00:39:04.499011 | orchestrator | changed: [testbed-manager] 2025-05-06 00:39:04.586427 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:39:05.022646 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:39:05.023485 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:39:05.024483 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:39:05.025456 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:39:05.027515 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:39:05.028140 | orchestrator | 2025-05-06 00:39:05.028210 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-06 00:39:05.028706 | orchestrator | Tuesday 06 May 2025 00:39:05 +0000 (0:00:01.187) 0:00:05.080 *********** 2025-05-06 00:39:06.306585 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:39:06.307806 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:39:06.307862 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:39:06.309761 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:39:06.312491 | orchestrator | ok: [testbed-manager] 2025-05-06 00:39:06.312995 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:39:06.313022 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:39:06.313042 | orchestrator | 2025-05-06 00:39:06.313961 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-06 00:39:06.315169 | orchestrator | Tuesday 06 May 2025 00:39:06 +0000 
(0:00:01.283) 0:00:06.363 *********** 2025-05-06 00:39:06.554627 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:39:06.633813 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:39:06.716962 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:39:06.803222 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:39:06.923191 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:39:06.924214 | orchestrator | changed: [testbed-manager] 2025-05-06 00:39:06.925244 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:39:06.928662 | orchestrator | 2025-05-06 00:39:19.936083 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-06 00:39:19.936222 | orchestrator | Tuesday 06 May 2025 00:39:06 +0000 (0:00:00.621) 0:00:06.984 *********** 2025-05-06 00:39:19.936256 | orchestrator | changed: [testbed-manager] 2025-05-06 00:39:19.937672 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:39:19.937704 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:39:19.937936 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:39:19.937959 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:39:19.937978 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:39:19.939485 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:39:19.939923 | orchestrator | 2025-05-06 00:39:19.941001 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-06 00:39:19.941242 | orchestrator | Tuesday 06 May 2025 00:39:19 +0000 (0:00:13.004) 0:00:19.989 *********** 2025-05-06 00:39:21.122395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:39:21.124239 | orchestrator | 2025-05-06 00:39:21.125005 | orchestrator | TASK [osism.services.hddtemp : 
Manage lm-sensors service] ********************** 2025-05-06 00:39:21.125058 | orchestrator | Tuesday 06 May 2025 00:39:21 +0000 (0:00:01.188) 0:00:21.177 *********** 2025-05-06 00:39:22.911894 | orchestrator | changed: [testbed-manager] 2025-05-06 00:39:22.913230 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:39:22.914241 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:39:22.915729 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:39:22.917220 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:39:22.917271 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:39:22.918434 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:39:22.919265 | orchestrator | 2025-05-06 00:39:22.920521 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:39:22.920673 | orchestrator | 2025-05-06 00:39:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:39:22.922318 | orchestrator | 2025-05-06 00:39:22 | INFO  | Please wait and do not abort execution. 
2025-05-06 00:39:22.922389 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:39:22.923348 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:22.923393 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:22.923872 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:22.924753 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:22.925408 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:22.926394 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:22.926854 | orchestrator | 2025-05-06 00:39:22.927654 | orchestrator | Tuesday 06 May 2025 00:39:22 +0000 (0:00:01.794) 0:00:22.972 *********** 2025-05-06 00:39:22.928135 | orchestrator | =============================================================================== 2025-05-06 00:39:22.929505 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.00s 2025-05-06 00:39:22.929719 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.92s 2025-05-06 00:39:22.930357 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.79s 2025-05-06 00:39:22.931020 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.28s 2025-05-06 00:39:22.931724 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.19s 2025-05-06 00:39:22.932585 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s 2025-05-06 00:39:22.933482 | orchestrator | 
osism.services.hddtemp : Include distribution specific install tasks ---- 1.12s 2025-05-06 00:39:22.933648 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.66s 2025-05-06 00:39:22.934123 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.62s 2025-05-06 00:39:23.442888 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-06 00:39:24.783830 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-06 00:39:24.784012 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-06 00:39:24.784310 | orchestrator | + local max_attempts=60 2025-05-06 00:39:24.784385 | orchestrator | + local name=ceph-ansible 2025-05-06 00:39:24.784401 | orchestrator | + local attempt_num=1 2025-05-06 00:39:24.784422 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-06 00:39:24.820972 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-06 00:39:24.821855 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-06 00:39:24.821902 | orchestrator | + local max_attempts=60 2025-05-06 00:39:24.821914 | orchestrator | + local name=kolla-ansible 2025-05-06 00:39:24.821924 | orchestrator | + local attempt_num=1 2025-05-06 00:39:24.821942 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-06 00:39:24.854548 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-06 00:39:24.854778 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-06 00:39:24.854801 | orchestrator | + local max_attempts=60 2025-05-06 00:39:24.854810 | orchestrator | + local name=osism-ansible 2025-05-06 00:39:24.854819 | orchestrator | + local attempt_num=1 2025-05-06 00:39:24.854832 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-06 00:39:24.881565 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-06 
00:39:25.057815 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-06 00:39:25.058542 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-06 00:39:25.058571 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-06 00:39:25.216089 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-06 00:39:25.392411 | orchestrator | ARA in osism-ansible already disabled. 2025-05-06 00:39:25.563743 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-06 00:39:25.564594 | orchestrator | + osism apply gather-facts 2025-05-06 00:39:27.160416 | orchestrator | 2025-05-06 00:39:27 | INFO  | Task b2abd803-ba39-4f4f-86c6-e4567751da4e (gather-facts) was prepared for execution. 2025-05-06 00:39:30.166960 | orchestrator | 2025-05-06 00:39:27 | INFO  | It takes a moment until task b2abd803-ba39-4f4f-86c6-e4567751da4e (gather-facts) has been started and output is visible here. 2025-05-06 00:39:30.167906 | orchestrator | 2025-05-06 00:39:30.169351 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-06 00:39:30.170960 | orchestrator | 2025-05-06 00:39:30.171235 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-06 00:39:30.172364 | orchestrator | Tuesday 06 May 2025 00:39:30 +0000 (0:00:00.155) 0:00:00.155 *********** 2025-05-06 00:39:35.208758 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:39:35.209777 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:39:35.209850 | orchestrator | ok: [testbed-manager] 2025-05-06 00:39:35.210212 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:39:35.211050 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:39:35.211324 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:39:35.212967 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:39:35.213173 | orchestrator | 2025-05-06 00:39:35.213482 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 
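The `wait_for_container_healthy 60 <name>` calls traced above poll the Docker health status of each ansible container. A minimal reconstruction, assuming the helper loops with a short sleep between probes (only the `docker inspect` health check and the 60-attempt limit appear in the trace; the retry interval is an assumption, and `${DOCKER:-docker}` is added here so a stub can stand in for the real CLI in a dry run):

```shell
# Reconstruction (sketch) of wait_for_container_healthy as traced above.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Probe the container's health status until it reports "healthy".
    until [[ "$("${DOCKER:-docker}" inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5   # assumed polling interval; not visible in the trace
    done
}
```

In the job the containers were already healthy, so each call returned on its first probe.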
2025-05-06 00:39:35.214218 | orchestrator | 2025-05-06 00:39:35.214916 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-06 00:39:35.214974 | orchestrator | Tuesday 06 May 2025 00:39:35 +0000 (0:00:05.052) 0:00:05.208 *********** 2025-05-06 00:39:35.363385 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:39:35.431760 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:39:35.507688 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:39:35.581034 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:39:35.654363 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:39:35.691941 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:39:35.692406 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:39:35.693201 | orchestrator | 2025-05-06 00:39:35.694463 | orchestrator | 2025-05-06 00:39:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:39:35.694582 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:39:35.694666 | orchestrator | 2025-05-06 00:39:35 | INFO  | Please wait and do not abort execution. 
2025-05-06 00:39:35.695714 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:35.696968 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:35.697667 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:35.698357 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:35.699177 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:35.699555 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:35.700031 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 00:39:35.700604 | orchestrator | 2025-05-06 00:39:35.701081 | orchestrator | Tuesday 06 May 2025 00:39:35 +0000 (0:00:00.484) 0:00:05.692 *********** 2025-05-06 00:39:35.701638 | orchestrator | =============================================================================== 2025-05-06 00:39:35.702173 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.05s 2025-05-06 00:39:35.702686 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2025-05-06 00:39:36.302130 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-06 00:39:36.312672 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-06 00:39:36.323699 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-06 00:39:36.334458 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-06 00:39:36.352815 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-06 00:39:36.363215 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-06 00:39:36.374472 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-06 00:39:36.390968 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-06 00:39:36.401509 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-06 00:39:36.418747 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-06 00:39:36.429615 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-06 00:39:36.443452 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-06 00:39:36.457661 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-05-06 00:39:36.476642 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-06 00:39:36.495872 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-06 00:39:36.513542 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-06 00:39:36.530564 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-06 00:39:36.548637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-06 00:39:36.565884 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-06 00:39:36.579757 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-06 00:39:36.592747 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-06 00:39:36.844066 | orchestrator | changed 2025-05-06 00:39:36.909602 | 2025-05-06 00:39:36.909734 | TASK [Deploy services] 2025-05-06 00:39:37.070153 | orchestrator | skipping: Conditional result was False 2025-05-06 00:39:37.091953 | 2025-05-06 00:39:37.092137 | TASK [Deploy in a nutshell] 2025-05-06 00:39:37.858666 | orchestrator | 2025-05-06 00:39:37.913233 | orchestrator | # PULL IMAGES 2025-05-06 00:39:37.913377 | orchestrator | 2025-05-06 00:39:37.913400 | orchestrator | + set -e 2025-05-06 00:39:37.913472 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-06 00:39:37.913498 | orchestrator | ++ export INTERACTIVE=false 2025-05-06 00:39:37.913515 | orchestrator | ++ INTERACTIVE=false 2025-05-06 00:39:37.913538 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-06 00:39:37.913564 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-06 00:39:37.913580 | orchestrator | + source /opt/manager-vars.sh 2025-05-06 00:39:37.913594 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-06 00:39:37.913608 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-06 00:39:37.913622 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-06 00:39:37.913637 | orchestrator | ++ CEPH_VERSION=reef 2025-05-06 00:39:37.913651 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-06 00:39:37.913665 | orchestrator | ++ CONFIGURATION_VERSION=main 
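The long run of `sudo ln -sf` calls above installs each deploy/upgrade/bootstrap script under a short command name. The same pattern can be sketched as a loop over a script-to-command map (two illustrative entries taken from the trace; the real script links every helper individually and prefixes `sudo`, while `BIN_DIR` defaults to a temp dir here for a safe dry run instead of `/usr/local/bin`):

```shell
# Sketch: symlink configuration scripts to short command names.
BIN_DIR="${BIN_DIR:-$(mktemp -d)}"   # the job targets /usr/local/bin (with sudo)
declare -A links=(
    [deploy-infrastructure]=/opt/configuration/scripts/deploy/200-infrastructure.sh
    [deploy-openstack]=/opt/configuration/scripts/deploy/300-openstack.sh
)
for cmd in "${!links[@]}"; do
    # -sf: create a symbolic link, replacing any existing one
    ln -sf "${links[$cmd]}" "$BIN_DIR/$cmd"
done
```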
2025-05-06 00:39:37.913680 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-06 00:39:37.913694 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-06 00:39:37.913709 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-06 00:39:37.913723 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-06 00:39:37.913737 | orchestrator | ++ export ARA=false 2025-05-06 00:39:37.913751 | orchestrator | ++ ARA=false 2025-05-06 00:39:37.913765 | orchestrator | ++ export TEMPEST=false 2025-05-06 00:39:37.913778 | orchestrator | ++ TEMPEST=false 2025-05-06 00:39:37.913793 | orchestrator | ++ export IS_ZUUL=true 2025-05-06 00:39:37.913807 | orchestrator | ++ IS_ZUUL=true 2025-05-06 00:39:37.913821 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.79 2025-05-06 00:39:37.913835 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.79 2025-05-06 00:39:37.913849 | orchestrator | ++ export EXTERNAL_API=false 2025-05-06 00:39:37.913863 | orchestrator | ++ EXTERNAL_API=false 2025-05-06 00:39:37.913877 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-06 00:39:37.913891 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-06 00:39:37.913913 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-06 00:39:37.913927 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-06 00:39:37.913945 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-06 00:39:37.913959 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-06 00:39:37.913973 | orchestrator | + echo 2025-05-06 00:39:37.913987 | orchestrator | + echo '# PULL IMAGES' 2025-05-06 00:39:37.914001 | orchestrator | + echo 2025-05-06 00:39:37.914049 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-06 00:39:37.914094 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-06 00:39:39.266339 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-06 00:39:39.266511 | orchestrator | 2025-05-06 00:39:39 | INFO  | Trying to run play pull-images in environment custom 2025-05-06 00:39:39.312487 | orchestrator | 2025-05-06 00:39:39 | 
INFO  | Task 05529aa5-3616-43a1-822d-a42711dc1998 (pull-images) was prepared for execution. 2025-05-06 00:39:42.278995 | orchestrator | 2025-05-06 00:39:39 | INFO  | It takes a moment until task 05529aa5-3616-43a1-822d-a42711dc1998 (pull-images) has been started and output is visible here. 2025-05-06 00:39:42.279155 | orchestrator | 2025-05-06 00:39:42.279360 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-06 00:39:42.280359 | orchestrator | 2025-05-06 00:39:42.280955 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-06 00:39:42.281696 | orchestrator | Tuesday 06 May 2025 00:39:42 +0000 (0:00:00.148) 0:00:00.148 *********** 2025-05-06 00:40:20.020803 | orchestrator | changed: [testbed-manager] 2025-05-06 00:40:20.021506 | orchestrator | 2025-05-06 00:40:20.021567 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-06 00:41:03.891305 | orchestrator | Tuesday 06 May 2025 00:40:20 +0000 (0:00:37.742) 0:00:37.890 *********** 2025-05-06 00:41:03.891503 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-06 00:41:03.891926 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-06 00:41:03.891968 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-06 00:41:03.892896 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-06 00:41:03.893993 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-06 00:41:03.896966 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-06 00:41:03.897740 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-06 00:41:03.899278 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-06 00:41:03.899746 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-06 00:41:03.902934 | orchestrator | changed: [testbed-manager] => (item=ironic) 
2025-05-06 00:41:03.904234 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-06 00:41:03.905463 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-06 00:41:03.906783 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-06 00:41:03.908245 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-06 00:41:03.909216 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-06 00:41:03.910385 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-06 00:41:03.911574 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-06 00:41:03.912721 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-06 00:41:03.915475 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-06 00:41:03.915763 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-06 00:41:03.915787 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-06 00:41:03.915803 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-06 00:41:03.915825 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-06 00:41:03.916655 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-06 00:41:03.917297 | orchestrator | 2025-05-06 00:41:03.918187 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:41:03.918671 | orchestrator | 2025-05-06 00:41:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:41:03.918982 | orchestrator | 2025-05-06 00:41:03 | INFO  | Please wait and do not abort execution. 
2025-05-06 00:41:03.919937 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:41:03.920673 | orchestrator | 2025-05-06 00:41:03.921380 | orchestrator | Tuesday 06 May 2025 00:41:03 +0000 (0:00:43.868) 0:01:21.759 *********** 2025-05-06 00:41:03.922094 | orchestrator | =============================================================================== 2025-05-06 00:41:03.922666 | orchestrator | Pull other images ------------------------------------------------------ 43.87s 2025-05-06 00:41:03.923276 | orchestrator | Pull keystone image ---------------------------------------------------- 37.74s 2025-05-06 00:41:05.538118 | orchestrator | 2025-05-06 00:41:05 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-06 00:41:05.578897 | orchestrator | 2025-05-06 00:41:05 | INFO  | Task 13d626bd-229f-4d87-95ba-a6f77bdc4c5b (wipe-partitions) was prepared for execution. 2025-05-06 00:41:08.185590 | orchestrator | 2025-05-06 00:41:05 | INFO  | It takes a moment until task 13d626bd-229f-4d87-95ba-a6f77bdc4c5b (wipe-partitions) has been started and output is visible here. 
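Before the pull step, the trace showed `semver 8.1.0 7.0.0` evaluating to `1`, gating the `-e custom pull-images` path on the manager version. A hedged sketch of such a comparison helper, assuming it prints `1`/`0`/`-1` as the first version is greater than, equal to, or less than the second (only plain `x.y.z` numeric versions are handled here):

```shell
# Sketch of a three-component semver comparison: prints 1, 0, or -1.
semver() {
    local IFS=.
    local -a a=($1) b=($2)   # split each version on dots
    local i
    for i in 0 1 2; do
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
    done
    echo 0
}
```

With `MANAGER_VERSION=8.1.0` the check `[[ $(semver 8.1.0 7.0.0) -ge 0 ]]` passes, matching the `[[ 1 -ge 0 ]]` line in the trace.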
2025-05-06 00:41:08.185711 | orchestrator | 2025-05-06 00:41:08.186113 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-06 00:41:08.186278 | orchestrator | 2025-05-06 00:41:08.186332 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-06 00:41:08.751435 | orchestrator | Tuesday 06 May 2025 00:41:08 +0000 (0:00:00.114) 0:00:00.114 *********** 2025-05-06 00:41:08.751573 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:41:08.754386 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:41:08.754900 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:41:08.754927 | orchestrator | 2025-05-06 00:41:08.754948 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-06 00:41:08.755937 | orchestrator | Tuesday 06 May 2025 00:41:08 +0000 (0:00:00.570) 0:00:00.685 *********** 2025-05-06 00:41:08.889483 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:41:08.964977 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:41:08.966453 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:41:08.968181 | orchestrator | 2025-05-06 00:41:08.971846 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-06 00:41:09.592431 | orchestrator | Tuesday 06 May 2025 00:41:08 +0000 (0:00:00.212) 0:00:00.897 *********** 2025-05-06 00:41:09.592559 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:41:09.593067 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:41:09.593546 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:41:09.593577 | orchestrator | 2025-05-06 00:41:09.594483 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-06 00:41:09.595375 | orchestrator | Tuesday 06 May 2025 00:41:09 +0000 (0:00:00.626) 0:00:01.524 *********** 2025-05-06 00:41:09.730316 | orchestrator | skipping: 
[testbed-node-3] 2025-05-06 00:41:09.818976 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:41:09.822186 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:41:09.822400 | orchestrator | 2025-05-06 00:41:09.822439 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-06 00:41:09.823188 | orchestrator | Tuesday 06 May 2025 00:41:09 +0000 (0:00:00.229) 0:00:01.754 *********** 2025-05-06 00:41:10.978601 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-06 00:41:10.979007 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-06 00:41:10.980462 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-06 00:41:10.981399 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-06 00:41:10.982196 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-06 00:41:10.983254 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-06 00:41:10.984910 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-06 00:41:10.985686 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-06 00:41:10.987268 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-06 00:41:10.987737 | orchestrator | 2025-05-06 00:41:10.988029 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-06 00:41:10.991292 | orchestrator | Tuesday 06 May 2025 00:41:10 +0000 (0:00:01.156) 0:00:02.910 *********** 2025-05-06 00:41:12.302240 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-06 00:41:12.303094 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-06 00:41:12.304525 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-06 00:41:12.305715 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-06 00:41:12.307537 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-06 00:41:12.308073 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-05-06 00:41:12.308504 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-06 00:41:12.310238 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-06 00:41:12.311277 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-06 00:41:12.311305 | orchestrator | 2025-05-06 00:41:12.311326 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-06 00:41:15.219441 | orchestrator | Tuesday 06 May 2025 00:41:12 +0000 (0:00:01.325) 0:00:04.236 *********** 2025-05-06 00:41:15.219614 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-06 00:41:15.219902 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-06 00:41:15.220069 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-06 00:41:15.220179 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-06 00:41:15.220505 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-06 00:41:15.220655 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-06 00:41:15.221178 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-06 00:41:15.221394 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-06 00:41:15.221917 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-06 00:41:15.222122 | orchestrator | 2025-05-06 00:41:15.222536 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-06 00:41:15.224302 | orchestrator | Tuesday 06 May 2025 00:41:15 +0000 (0:00:02.913) 0:00:07.149 *********** 2025-05-06 00:41:15.817927 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:41:15.818583 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:41:15.819794 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:41:15.820582 | orchestrator | 2025-05-06 00:41:15.820622 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-05-06 00:41:15.821279 | orchestrator | Tuesday 06 May 2025 00:41:15 +0000 (0:00:00.601) 0:00:07.750 *********** 2025-05-06 00:41:16.438258 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:41:16.438859 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:41:16.439115 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:41:16.439589 | orchestrator | 2025-05-06 00:41:16.440434 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:41:16.440687 | orchestrator | 2025-05-06 00:41:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:41:16.440714 | orchestrator | 2025-05-06 00:41:16 | INFO  | Please wait and do not abort execution. 2025-05-06 00:41:16.440736 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:41:16.441044 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:41:16.441509 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:41:16.441740 | orchestrator | 2025-05-06 00:41:16.442004 | orchestrator | Tuesday 06 May 2025 00:41:16 +0000 (0:00:00.619) 0:00:08.369 *********** 2025-05-06 00:41:16.442268 | orchestrator | =============================================================================== 2025-05-06 00:41:16.442491 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.91s 2025-05-06 00:41:16.442779 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s 2025-05-06 00:41:16.445115 | orchestrator | Check device availability ----------------------------------------------- 1.16s 2025-05-06 00:41:18.396978 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.63s 2025-05-06 
00:41:18.397040 | orchestrator | Request device events from the kernel ----------------------------------- 0.62s 2025-05-06 00:41:18.397047 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2025-05-06 00:41:18.397054 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s 2025-05-06 00:41:18.397060 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-05-06 00:41:18.397066 | orchestrator | Remove all rook related logical devices --------------------------------- 0.21s 2025-05-06 00:41:18.397082 | orchestrator | 2025-05-06 00:41:18 | INFO  | Task 8e3fcf7b-368f-4d77-832f-19062e6a70f8 (facts) was prepared for execution. 2025-05-06 00:41:21.084976 | orchestrator | 2025-05-06 00:41:18 | INFO  | It takes a moment until task 8e3fcf7b-368f-4d77-832f-19062e6a70f8 (facts) has been started and output is visible here. 2025-05-06 00:41:21.085103 | orchestrator | 2025-05-06 00:41:21.086234 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-06 00:41:21.088418 | orchestrator | 2025-05-06 00:41:21.088450 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-06 00:41:21.088471 | orchestrator | Tuesday 06 May 2025 00:41:21 +0000 (0:00:00.146) 0:00:00.146 *********** 2025-05-06 00:41:21.988052 | orchestrator | ok: [testbed-manager] 2025-05-06 00:41:21.990173 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:41:21.990307 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:41:21.990337 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:41:21.991717 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:41:21.992448 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:41:21.993669 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:41:21.995072 | orchestrator | 2025-05-06 00:41:21.996244 | orchestrator | TASK [osism.commons.facts : Copy fact files] 
*********************************** 2025-05-06 00:41:21.996577 | orchestrator | Tuesday 06 May 2025 00:41:21 +0000 (0:00:00.902) 0:00:01.048 *********** 2025-05-06 00:41:22.132660 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:41:22.223296 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:41:22.282320 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:41:22.342283 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:41:22.418317 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:41:23.136345 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:41:23.136612 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:41:23.137025 | orchestrator | 2025-05-06 00:41:23.137773 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-06 00:41:23.139776 | orchestrator | 2025-05-06 00:41:23.140501 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-06 00:41:23.140650 | orchestrator | Tuesday 06 May 2025 00:41:23 +0000 (0:00:01.150) 0:00:02.198 *********** 2025-05-06 00:41:27.774628 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:41:27.775380 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:41:27.775874 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:41:27.776327 | orchestrator | ok: [testbed-manager] 2025-05-06 00:41:27.776874 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:41:27.779709 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:41:27.780463 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:41:27.780976 | orchestrator | 2025-05-06 00:41:27.781317 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-06 00:41:27.781525 | orchestrator | 2025-05-06 00:41:27.782242 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-06 00:41:27.782893 | orchestrator | Tuesday 06 May 2025 00:41:27 +0000 (0:00:04.638) 
0:00:06.837 *********** 2025-05-06 00:41:28.059210 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:41:28.143702 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:41:28.219061 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:41:28.302331 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:41:28.380809 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:41:28.416631 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:41:28.418307 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:41:28.419413 | orchestrator | 2025-05-06 00:41:28.420797 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:41:28.421551 | orchestrator | 2025-05-06 00:41:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:41:28.423158 | orchestrator | 2025-05-06 00:41:28 | INFO  | Please wait and do not abort execution. 2025-05-06 00:41:28.423281 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:41:28.424207 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:41:28.424926 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:41:28.426642 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:41:28.427872 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:41:28.428850 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:41:28.430206 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:41:28.430856 | orchestrator | 2025-05-06 00:41:28.431970 | orchestrator | Tuesday 06 May 2025 00:41:28 
+0000 (0:00:00.642) 0:00:07.479 *********** 2025-05-06 00:41:28.433415 | orchestrator | =============================================================================== 2025-05-06 00:41:28.434927 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.64s 2025-05-06 00:41:28.436340 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.15s 2025-05-06 00:41:28.437213 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.90s 2025-05-06 00:41:28.439842 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s 2025-05-06 00:41:30.375693 | orchestrator | 2025-05-06 00:41:30 | INFO  | Task 52a69dbe-d5e4-4e4e-a9cb-38b334dab5ae (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-06 00:41:33.672302 | orchestrator | 2025-05-06 00:41:30 | INFO  | It takes a moment until task 52a69dbe-d5e4-4e4e-a9cb-38b334dab5ae (ceph-configure-lvm-volumes) has been started and output is visible here. 
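The wipe-partitions play above ran, per OSD device (`/dev/sdb`..`/dev/sdd` on nodes 3-5): signature removal with wipefs, zeroing of the first 32M, then a udev refresh. A minimal sketch of that per-device sequence (destructive on real devices; `zero_head` is factored out so it can be exercised against a plain file, and the udev steps are left as comments since they need root):

```shell
# Overwrite the first 32M of a device (or file) with zeros.
zero_head() {
    dd if=/dev/zero of="$1" bs=1M count=32 conv=notrunc status=none
}

# Sketch of the full per-device wipe the play performs.
wipe_device() {
    wipefs --all "$1"   # remove filesystem/RAID/LVM signatures
    zero_head "$1"      # matches "Overwrite first 32M with zeros"
    # udevadm control --reload-rules   # "Reload udev rules"
    # udevadm trigger                  # "Request device events from the kernel"
}
```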
2025-05-06 00:41:33.672459 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-06 00:41:34.357380 | orchestrator |
2025-05-06 00:41:34.358486 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-06 00:41:34.358892 | orchestrator |
2025-05-06 00:41:34.361419 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-06 00:41:34.361885 | orchestrator | Tuesday 06 May 2025 00:41:34 +0000 (0:00:00.589) 0:00:00.589 ***********
2025-05-06 00:41:34.651530 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-06 00:41:34.653401 | orchestrator |
2025-05-06 00:41:34.653857 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-06 00:41:34.654519 | orchestrator | Tuesday 06 May 2025 00:41:34 +0000 (0:00:00.298) 0:00:00.887 ***********
2025-05-06 00:41:34.900177 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:41:34.901776 | orchestrator |
2025-05-06 00:41:34.902100 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:34.903582 | orchestrator | Tuesday 06 May 2025 00:41:34 +0000 (0:00:00.247) 0:00:01.135 ***********
2025-05-06 00:41:35.365007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-06 00:41:35.365354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-06 00:41:35.365394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-06 00:41:35.365416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-06 00:41:35.365609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-06 00:41:35.366291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-06 00:41:35.366575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-06 00:41:35.369785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-06 00:41:35.370111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-06 00:41:35.370499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-06 00:41:35.370857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-06 00:41:35.370886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-06 00:41:35.371100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-06 00:41:35.371410 | orchestrator |
2025-05-06 00:41:35.371654 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:35.372095 | orchestrator | Tuesday 06 May 2025 00:41:35 +0000 (0:00:00.467) 0:00:01.603 ***********
2025-05-06 00:41:35.541434 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:35.544442 | orchestrator |
2025-05-06 00:41:35.544556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:35.544963 | orchestrator | Tuesday 06 May 2025 00:41:35 +0000 (0:00:00.174) 0:00:01.777 ***********
2025-05-06 00:41:35.709419 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:35.862736 | orchestrator |
2025-05-06 00:41:35.862870 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:35.862893 | orchestrator | Tuesday 06 May 2025 00:41:35 +0000 (0:00:00.168) 0:00:01.946 ***********
2025-05-06 00:41:35.862925 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:35.863459 | orchestrator |
2025-05-06 00:41:35.864506 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:35.866378 | orchestrator | Tuesday 06 May 2025 00:41:35 +0000 (0:00:00.155) 0:00:02.101 ***********
2025-05-06 00:41:36.015601 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:36.016287 | orchestrator |
2025-05-06 00:41:36.017343 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:36.018371 | orchestrator | Tuesday 06 May 2025 00:41:36 +0000 (0:00:00.152) 0:00:02.253 ***********
2025-05-06 00:41:36.193446 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:36.193832 | orchestrator |
2025-05-06 00:41:36.193870 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:36.193926 | orchestrator | Tuesday 06 May 2025 00:41:36 +0000 (0:00:00.177) 0:00:02.431 ***********
2025-05-06 00:41:36.388919 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:36.389309 | orchestrator |
2025-05-06 00:41:36.390063 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:36.393493 | orchestrator | Tuesday 06 May 2025 00:41:36 +0000 (0:00:00.195) 0:00:02.626 ***********
2025-05-06 00:41:36.626582 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:36.631313 | orchestrator |
2025-05-06 00:41:36.906244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:36.906345 | orchestrator | Tuesday 06 May 2025 00:41:36 +0000 (0:00:00.236) 0:00:02.862 ***********
2025-05-06 00:41:36.906379 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:36.907302 | orchestrator |
2025-05-06 00:41:36.907667 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:36.908148 | orchestrator | Tuesday 06 May 2025 00:41:36 +0000 (0:00:00.281) 0:00:03.144 ***********
2025-05-06 00:41:37.406857 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0)
2025-05-06 00:41:37.407158 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0)
2025-05-06 00:41:37.408024 | orchestrator |
2025-05-06 00:41:37.408232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:37.409243 | orchestrator | Tuesday 06 May 2025 00:41:37 +0000 (0:00:00.499) 0:00:03.643 ***********
2025-05-06 00:41:37.955222 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8c0721df-98b6-45a8-8372-f184b99eacbe)
2025-05-06 00:41:37.956077 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8c0721df-98b6-45a8-8372-f184b99eacbe)
2025-05-06 00:41:37.959292 | orchestrator |
2025-05-06 00:41:37.960061 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:37.960756 | orchestrator | Tuesday 06 May 2025 00:41:37 +0000 (0:00:00.547) 0:00:04.191 ***********
2025-05-06 00:41:38.360715 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc7f276d-c2ba-4b91-9f6b-a505ec6ab98a)
2025-05-06 00:41:38.361170 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc7f276d-c2ba-4b91-9f6b-a505ec6ab98a)
2025-05-06 00:41:38.361718 | orchestrator |
2025-05-06 00:41:38.361740 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:38.362190 | orchestrator | Tuesday 06 May 2025 00:41:38 +0000 (0:00:00.407) 0:00:04.599 ***********
2025-05-06 00:41:38.735685 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7e976783-2213-433c-91fb-66c729e68827)
2025-05-06 00:41:38.736741 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7e976783-2213-433c-91fb-66c729e68827)
2025-05-06 00:41:38.738171 | orchestrator |
2025-05-06 00:41:38.739220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:38.740187 | orchestrator | Tuesday 06 May 2025 00:41:38 +0000 (0:00:00.371) 0:00:04.970 ***********
2025-05-06 00:41:39.035393 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-06 00:41:39.035814 | orchestrator |
2025-05-06 00:41:39.035851 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:39.036283 | orchestrator | Tuesday 06 May 2025 00:41:39 +0000 (0:00:00.301) 0:00:05.271 ***********
2025-05-06 00:41:39.428765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-06 00:41:39.429146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-06 00:41:39.429900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-06 00:41:39.433990 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-06 00:41:39.434258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-06 00:41:39.434895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-06 00:41:39.435702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-06 00:41:39.436492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-06 00:41:39.436989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-06 00:41:39.437436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-06 00:41:39.438001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-06 00:41:39.438654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-06 00:41:39.438693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-06 00:41:39.439047 | orchestrator |
2025-05-06 00:41:39.439557 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:39.440152 | orchestrator | Tuesday 06 May 2025 00:41:39 +0000 (0:00:00.393) 0:00:05.665 ***********
2025-05-06 00:41:39.606990 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:39.608506 | orchestrator |
2025-05-06 00:41:39.608596 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:39.608623 | orchestrator | Tuesday 06 May 2025 00:41:39 +0000 (0:00:00.178) 0:00:05.843 ***********
2025-05-06 00:41:39.777250 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:39.778750 | orchestrator |
2025-05-06 00:41:39.781808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:39.782267 | orchestrator | Tuesday 06 May 2025 00:41:39 +0000 (0:00:00.171) 0:00:06.014 ***********
2025-05-06 00:41:39.954542 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:39.954676 | orchestrator |
2025-05-06 00:41:39.955383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:39.959547 | orchestrator | Tuesday 06 May 2025 00:41:39 +0000 (0:00:00.177) 0:00:06.192 ***********
2025-05-06 00:41:40.141634 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:40.141843 | orchestrator |
2025-05-06 00:41:40.142190 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:40.142745 | orchestrator | Tuesday 06 May 2025 00:41:40 +0000 (0:00:00.184) 0:00:06.376 ***********
2025-05-06 00:41:40.303696 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:40.304387 | orchestrator |
2025-05-06 00:41:40.304450 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:40.304676 | orchestrator | Tuesday 06 May 2025 00:41:40 +0000 (0:00:00.163) 0:00:06.540 ***********
2025-05-06 00:41:40.613741 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:40.613946 | orchestrator |
2025-05-06 00:41:40.616438 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:40.616914 | orchestrator | Tuesday 06 May 2025 00:41:40 +0000 (0:00:00.309) 0:00:06.849 ***********
2025-05-06 00:41:40.783002 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:40.783252 | orchestrator |
2025-05-06 00:41:40.785580 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:40.789325 | orchestrator | Tuesday 06 May 2025 00:41:40 +0000 (0:00:00.170) 0:00:07.020 ***********
2025-05-06 00:41:40.966532 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:40.968331 | orchestrator |
2025-05-06 00:41:40.972094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:40.974515 | orchestrator | Tuesday 06 May 2025 00:41:40 +0000 (0:00:00.183) 0:00:07.204 ***********
2025-05-06 00:41:41.544544 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-06 00:41:41.546446 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-06 00:41:41.546962 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-06 00:41:41.547210 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-06 00:41:41.547705 | orchestrator |
2025-05-06 00:41:41.548037 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:41.548070 | orchestrator | Tuesday 06 May 2025 00:41:41 +0000 (0:00:00.576) 0:00:07.781 ***********
2025-05-06 00:41:41.728482 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:41.728690 | orchestrator |
2025-05-06 00:41:41.729792 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:41.730095 | orchestrator | Tuesday 06 May 2025 00:41:41 +0000 (0:00:00.184) 0:00:07.965 ***********
2025-05-06 00:41:41.904880 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:41.905027 | orchestrator |
2025-05-06 00:41:41.905055 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:41.905265 | orchestrator | Tuesday 06 May 2025 00:41:41 +0000 (0:00:00.175) 0:00:08.140 ***********
2025-05-06 00:41:42.096564 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:42.099136 | orchestrator |
2025-05-06 00:41:42.099227 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:42.099516 | orchestrator | Tuesday 06 May 2025 00:41:42 +0000 (0:00:00.192) 0:00:08.333 ***********
2025-05-06 00:41:42.286856 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:42.288545 | orchestrator |
2025-05-06 00:41:42.289668 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-06 00:41:42.292628 | orchestrator | Tuesday 06 May 2025 00:41:42 +0000 (0:00:00.190) 0:00:08.523 ***********
2025-05-06 00:41:42.472413 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-06 00:41:42.472612 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-05-06 00:41:42.472642 | orchestrator |
2025-05-06 00:41:42.472716 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-06 00:41:42.472971 | orchestrator | Tuesday 06 May 2025 00:41:42 +0000 (0:00:00.183) 0:00:08.706 ***********
2025-05-06 00:41:42.598883 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:42.602392 | orchestrator |
2025-05-06 00:41:42.905957 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-06 00:41:42.906202 | orchestrator | Tuesday 06 May 2025 00:41:42 +0000 (0:00:00.126) 0:00:08.833 ***********
2025-05-06 00:41:42.906241 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:42.906321 | orchestrator |
2025-05-06 00:41:42.908557 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-06 00:41:43.032506 | orchestrator | Tuesday 06 May 2025 00:41:42 +0000 (0:00:00.307) 0:00:09.141 ***********
2025-05-06 00:41:43.033371 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:43.034178 | orchestrator |
2025-05-06 00:41:43.182816 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-06 00:41:43.182923 | orchestrator | Tuesday 06 May 2025 00:41:43 +0000 (0:00:00.124) 0:00:09.266 ***********
2025-05-06 00:41:43.182957 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:41:43.183052 | orchestrator |
2025-05-06 00:41:43.183080 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-06 00:41:43.183170 | orchestrator | Tuesday 06 May 2025 00:41:43 +0000 (0:00:00.151) 0:00:09.418 ***********
2025-05-06 00:41:43.376617 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83550523-1175-5b11-b232-63a45b36e32a'}})
2025-05-06 00:41:43.376759 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2fbee355-69b3-5569-a73a-eae1d5356d34'}})
2025-05-06 00:41:43.376782 | orchestrator |
2025-05-06 00:41:43.380331 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-06 00:41:43.380445 | orchestrator | Tuesday 06 May 2025 00:41:43 +0000 (0:00:00.195) 0:00:09.614 ***********
2025-05-06 00:41:43.547694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83550523-1175-5b11-b232-63a45b36e32a'}})
2025-05-06 00:41:43.548792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2fbee355-69b3-5569-a73a-eae1d5356d34'}})
2025-05-06 00:41:43.548836 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:43.548852 | orchestrator |
2025-05-06 00:41:43.548874 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-06 00:41:43.548964 | orchestrator | Tuesday 06 May 2025 00:41:43 +0000 (0:00:00.169) 0:00:09.783 ***********
2025-05-06 00:41:43.731670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83550523-1175-5b11-b232-63a45b36e32a'}})
2025-05-06 00:41:43.731908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2fbee355-69b3-5569-a73a-eae1d5356d34'}})
2025-05-06 00:41:43.733087 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:43.733309 | orchestrator |
2025-05-06 00:41:43.733574 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-06 00:41:43.733843 | orchestrator | Tuesday 06 May 2025 00:41:43 +0000 (0:00:00.183) 0:00:09.966 ***********
2025-05-06 00:41:43.881968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83550523-1175-5b11-b232-63a45b36e32a'}})
2025-05-06 00:41:43.883290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2fbee355-69b3-5569-a73a-eae1d5356d34'}})
2025-05-06 00:41:43.884837 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:44.018499 | orchestrator |
2025-05-06 00:41:44.018632 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-06 00:41:44.018655 | orchestrator | Tuesday 06 May 2025 00:41:43 +0000 (0:00:00.147) 0:00:10.114 ***********
2025-05-06 00:41:44.018687 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:41:44.018783 | orchestrator |
2025-05-06 00:41:44.018810 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-06 00:41:44.204048 | orchestrator | Tuesday 06 May 2025 00:41:44 +0000 (0:00:00.139) 0:00:10.254 ***********
2025-05-06 00:41:44.204254 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:41:44.204398 | orchestrator |
2025-05-06 00:41:44.204979 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-06 00:41:44.205010 | orchestrator | Tuesday 06 May 2025 00:41:44 +0000 (0:00:00.187) 0:00:10.441 ***********
2025-05-06 00:41:44.335737 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:44.339094 | orchestrator |
2025-05-06 00:41:44.339577 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-06 00:41:44.339617 | orchestrator | Tuesday 06 May 2025 00:41:44 +0000 (0:00:00.131) 0:00:10.573 ***********
2025-05-06 00:41:44.471670 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:44.471954 | orchestrator |
2025-05-06 00:41:44.472230 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-06 00:41:44.472268 | orchestrator | Tuesday 06 May 2025 00:41:44 +0000 (0:00:00.134) 0:00:10.708 ***********
2025-05-06 00:41:44.612473 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:44.613466 | orchestrator |
2025-05-06 00:41:44.613518 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-06 00:41:44.614673 | orchestrator | Tuesday 06 May 2025 00:41:44 +0000 (0:00:00.139) 0:00:10.848 ***********
2025-05-06 00:41:45.005030 | orchestrator | ok: [testbed-node-3] => {
2025-05-06 00:41:45.006281 | orchestrator |  "ceph_osd_devices": {
2025-05-06 00:41:45.007523 | orchestrator |  "sdb": {
2025-05-06 00:41:45.008366 | orchestrator |  "osd_lvm_uuid": "83550523-1175-5b11-b232-63a45b36e32a"
2025-05-06 00:41:45.009348 | orchestrator |  },
2025-05-06 00:41:45.009881 | orchestrator |  "sdc": {
2025-05-06 00:41:45.010482 | orchestrator |  "osd_lvm_uuid": "2fbee355-69b3-5569-a73a-eae1d5356d34"
2025-05-06 00:41:45.012247 | orchestrator |  }
2025-05-06 00:41:45.012358 | orchestrator |  }
2025-05-06 00:41:45.012384 | orchestrator | }
2025-05-06 00:41:45.013033 | orchestrator |
2025-05-06 00:41:45.013554 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-06 00:41:45.014089 | orchestrator | Tuesday 06 May 2025 00:41:44 +0000 (0:00:00.393) 0:00:11.242 ***********
2025-05-06 00:41:45.131189 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:45.132195 | orchestrator |
2025-05-06 00:41:45.140029 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-06 00:41:45.140261 | orchestrator | Tuesday 06 May 2025 00:41:45 +0000 (0:00:00.125) 0:00:11.368 ***********
2025-05-06 00:41:45.261624 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:45.262838 | orchestrator |
2025-05-06 00:41:45.264045 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-06 00:41:45.265002 | orchestrator | Tuesday 06 May 2025 00:41:45 +0000 (0:00:00.130) 0:00:11.498 ***********
2025-05-06 00:41:45.395890 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:41:45.396220 | orchestrator |
2025-05-06 00:41:45.396264 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-06 00:41:45.396667 | orchestrator | Tuesday 06 May 2025 00:41:45 +0000 (0:00:00.134) 0:00:11.633 ***********
2025-05-06 00:41:45.666339 | orchestrator | changed: [testbed-node-3] => {
2025-05-06 00:41:45.666858 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-06 00:41:45.667221 | orchestrator |  "ceph_osd_devices": {
2025-05-06 00:41:45.669701 | orchestrator |  "sdb": {
2025-05-06 00:41:45.670173 | orchestrator |  "osd_lvm_uuid": "83550523-1175-5b11-b232-63a45b36e32a"
2025-05-06 00:41:45.672090 | orchestrator |  },
2025-05-06 00:41:45.672190 | orchestrator |  "sdc": {
2025-05-06 00:41:45.674757 | orchestrator |  "osd_lvm_uuid": "2fbee355-69b3-5569-a73a-eae1d5356d34"
2025-05-06 00:41:45.674887 | orchestrator |  }
2025-05-06 00:41:45.675327 | orchestrator |  },
2025-05-06 00:41:45.676220 | orchestrator |  "lvm_volumes": [
2025-05-06 00:41:45.676512 | orchestrator |  {
2025-05-06 00:41:45.677081 | orchestrator |  "data": "osd-block-83550523-1175-5b11-b232-63a45b36e32a",
2025-05-06 00:41:45.677632 | orchestrator |  "data_vg": "ceph-83550523-1175-5b11-b232-63a45b36e32a"
2025-05-06 00:41:45.677987 | orchestrator |  },
2025-05-06 00:41:45.678504 | orchestrator |  {
2025-05-06 00:41:45.678975 | orchestrator |  "data": "osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34",
2025-05-06 00:41:45.679651 | orchestrator |  "data_vg": "ceph-2fbee355-69b3-5569-a73a-eae1d5356d34"
2025-05-06 00:41:45.680059 | orchestrator |  }
2025-05-06 00:41:45.681434 | orchestrator |  ]
2025-05-06 00:41:45.681728 | orchestrator |  }
2025-05-06 00:41:45.682679 | orchestrator | }
2025-05-06 00:41:45.683606 | orchestrator |
2025-05-06 00:41:45.684141 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-06 00:41:45.684789 | orchestrator | Tuesday 06 May 2025 00:41:45 +0000 (0:00:00.265) 0:00:11.899 ***********
2025-05-06 00:41:48.149062 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-06 00:41:48.150178 | orchestrator |
2025-05-06 00:41:48.153840 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-06 00:41:48.154071 | orchestrator |
2025-05-06 00:41:48.154140 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-06 00:41:48.155001 | orchestrator | Tuesday 06 May 2025 00:41:48 +0000 (0:00:02.481) 0:00:14.381 ***********
2025-05-06 00:41:48.481220 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-06 00:41:48.487380 | orchestrator |
2025-05-06 00:41:48.489368 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-06 00:41:48.774754 | orchestrator | Tuesday 06 May 2025 00:41:48 +0000 (0:00:00.324) 0:00:14.705 ***********
2025-05-06 00:41:48.774916 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:41:48.776218 | orchestrator |
2025-05-06 00:41:48.776979 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:48.777468 | orchestrator | Tuesday 06 May 2025 00:41:48 +0000 (0:00:00.307) 0:00:15.012 ***********
2025-05-06 00:41:49.215706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-05-06 00:41:49.219152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-05-06 00:41:49.220528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-05-06 00:41:49.222659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-05-06 00:41:49.226716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-05-06 00:41:49.228808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-05-06 00:41:49.229987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-05-06 00:41:49.232447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-05-06 00:41:49.233291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-05-06 00:41:49.234122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-05-06 00:41:49.237197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-05-06 00:41:49.238817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-05-06 00:41:49.241015 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-05-06 00:41:49.241296 | orchestrator |
2025-05-06 00:41:49.241337 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:49.242070 | orchestrator | Tuesday 06 May 2025 00:41:49 +0000 (0:00:00.435) 0:00:15.447 ***********
2025-05-06 00:41:49.411469 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:49.412852 | orchestrator |
2025-05-06 00:41:49.413791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:49.414864 | orchestrator | Tuesday 06 May 2025 00:41:49 +0000 (0:00:00.199) 0:00:15.647 ***********
2025-05-06 00:41:49.711738 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:49.712820 | orchestrator |
2025-05-06 00:41:49.713828 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:49.715115 | orchestrator | Tuesday 06 May 2025 00:41:49 +0000 (0:00:00.296) 0:00:15.944 ***********
2025-05-06 00:41:50.013896 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:50.016708 | orchestrator |
2025-05-06 00:41:50.019221 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:50.019601 | orchestrator | Tuesday 06 May 2025 00:41:50 +0000 (0:00:00.303) 0:00:16.247 ***********
2025-05-06 00:41:50.219849 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:51.044337 | orchestrator |
2025-05-06 00:41:51.044453 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:51.044470 | orchestrator | Tuesday 06 May 2025 00:41:50 +0000 (0:00:00.204) 0:00:16.451 ***********
2025-05-06 00:41:51.044496 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:51.295135 | orchestrator |
2025-05-06 00:41:51.295260 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:51.295293 | orchestrator | Tuesday 06 May 2025 00:41:51 +0000 (0:00:00.821) 0:00:17.273 ***********
2025-05-06 00:41:51.295327 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:51.295403 | orchestrator |
2025-05-06 00:41:51.296283 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:51.297225 | orchestrator | Tuesday 06 May 2025 00:41:51 +0000 (0:00:00.254) 0:00:17.528 ***********
2025-05-06 00:41:51.626360 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:51.626549 | orchestrator |
2025-05-06 00:41:51.626625 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:51.627244 | orchestrator | Tuesday 06 May 2025 00:41:51 +0000 (0:00:00.327) 0:00:17.855 ***********
2025-05-06 00:41:51.975277 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:51.975468 | orchestrator |
2025-05-06 00:41:51.975493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:51.975518 | orchestrator | Tuesday 06 May 2025 00:41:51 +0000 (0:00:00.336) 0:00:18.191 ***********
2025-05-06 00:41:52.508825 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8)
2025-05-06 00:41:52.509050 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8)
2025-05-06 00:41:52.511571 | orchestrator |
2025-05-06 00:41:52.511915 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:52.511986 | orchestrator | Tuesday 06 May 2025 00:41:52 +0000 (0:00:00.549) 0:00:18.741 ***********
2025-05-06 00:41:52.959296 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c3e2c64f-9688-4cad-bb81-b3a7d150bd8b)
2025-05-06 00:41:52.961009 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c3e2c64f-9688-4cad-bb81-b3a7d150bd8b)
2025-05-06 00:41:52.962070 | orchestrator |
2025-05-06 00:41:52.965031 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:52.966871 | orchestrator | Tuesday 06 May 2025 00:41:52 +0000 (0:00:00.454) 0:00:19.195 ***********
2025-05-06 00:41:53.520307 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc0c56a8-1377-4a36-857b-86c78b746055)
2025-05-06 00:41:53.520823 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc0c56a8-1377-4a36-857b-86c78b746055)
2025-05-06 00:41:53.521025 | orchestrator |
2025-05-06 00:41:53.521503 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:53.522161 | orchestrator | Tuesday 06 May 2025 00:41:53 +0000 (0:00:00.561) 0:00:19.757 ***********
2025-05-06 00:41:54.003193 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eefa0fb1-6e32-4be6-9371-3c36667f9eb4)
2025-05-06 00:41:54.004563 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eefa0fb1-6e32-4be6-9371-3c36667f9eb4)
2025-05-06 00:41:54.004638 | orchestrator |
2025-05-06 00:41:54.004659 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:41:54.004679 | orchestrator | Tuesday 06 May 2025 00:41:53 +0000 (0:00:00.480) 0:00:20.237 ***********
2025-05-06 00:41:54.284460 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-06 00:41:54.285568 | orchestrator |
2025-05-06 00:41:54.285648 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:54.286166 | orchestrator | Tuesday 06 May 2025 00:41:54 +0000 (0:00:00.284) 0:00:20.522 ***********
2025-05-06 00:41:55.152317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-05-06 00:41:55.153447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-05-06 00:41:55.154700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-05-06 00:41:55.161844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-05-06 00:41:55.162710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-05-06 00:41:55.164914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-05-06 00:41:55.165529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-05-06 00:41:55.165976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-05-06 00:41:55.166927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-05-06 00:41:55.168067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-05-06 00:41:55.169307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-05-06 00:41:55.170275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-05-06 00:41:55.171739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-05-06 00:41:55.172642 | orchestrator |
2025-05-06 00:41:55.173608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:55.174354 | orchestrator | Tuesday 06 May 2025 00:41:55 +0000 (0:00:00.861) 0:00:21.384 ***********
2025-05-06 00:41:55.419906 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:55.420331 | orchestrator |
2025-05-06 00:41:55.421300 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:55.421667 | orchestrator | Tuesday 06 May 2025 00:41:55 +0000 (0:00:00.272) 0:00:21.657 ***********
2025-05-06 00:41:55.617319 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:55.617486 | orchestrator |
2025-05-06 00:41:55.617507 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:55.617527 | orchestrator | Tuesday 06 May 2025 00:41:55 +0000 (0:00:00.195) 0:00:21.852 ***********
2025-05-06 00:41:55.784835 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:55.785217 | orchestrator |
2025-05-06 00:41:55.786069 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:55.786504 | orchestrator | Tuesday 06 May 2025 00:41:55 +0000 (0:00:00.168) 0:00:22.021 ***********
2025-05-06 00:41:55.988436 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:55.988588 | orchestrator |
2025-05-06 00:41:55.990134 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:56.180651 | orchestrator | Tuesday 06 May 2025 00:41:55 +0000 (0:00:00.195) 0:00:22.216 ***********
2025-05-06 00:41:56.180733 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:56.181697 | orchestrator |
2025-05-06 00:41:56.182865 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:56.183109 | orchestrator | Tuesday 06 May 2025 00:41:56 +0000 (0:00:00.202) 0:00:22.418 ***********
2025-05-06 00:41:56.367750 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:56.368045 | orchestrator |
2025-05-06 00:41:56.368119 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:56.368418 | orchestrator | Tuesday 06 May 2025 00:41:56 +0000 (0:00:00.182) 0:00:22.601 ***********
2025-05-06 00:41:56.557275 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:56.558126 | orchestrator |
2025-05-06 00:41:56.560013 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:56.561473 | orchestrator | Tuesday 06 May 2025 00:41:56 +0000 (0:00:00.193) 0:00:22.795 ***********
2025-05-06 00:41:56.745682 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:57.470513 | orchestrator |
2025-05-06 00:41:57.470644 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:57.470663 | orchestrator | Tuesday 06 May 2025 00:41:56 +0000 (0:00:00.186) 0:00:22.981 ***********
2025-05-06 00:41:57.470692 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-05-06 00:41:57.472021 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-05-06 00:41:57.474287 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-05-06 00:41:57.475418 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-05-06 00:41:57.475451 | orchestrator |
2025-05-06 00:41:57.476000 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:57.476912 | orchestrator | Tuesday 06 May 2025 00:41:57 +0000 (0:00:00.724) 0:00:23.705 ***********
2025-05-06 00:41:57.960008 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:57.960717 | orchestrator |
2025-05-06 00:41:57.960769 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:57.961206 | orchestrator | Tuesday 06 May 2025 00:41:57 +0000 (0:00:00.489) 0:00:24.195 ***********
2025-05-06 00:41:58.166895 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:58.167110 | orchestrator |
2025-05-06 00:41:58.167140 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:58.167163 | orchestrator | Tuesday 06 May 2025 00:41:58 +0000 (0:00:00.204) 0:00:24.399 ***********
2025-05-06 00:41:58.371156 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:58.373999 | orchestrator |
2025-05-06 00:41:58.374041 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:41:58.374252 | orchestrator | Tuesday 06 May 2025 00:41:58 +0000 (0:00:00.206) 0:00:24.605 ***********
2025-05-06 00:41:58.567231 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:58.569163 | orchestrator |
2025-05-06 00:41:58.569216 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-06 00:41:58.569230 | orchestrator | Tuesday 06 May 2025 00:41:58 +0000 (0:00:00.195) 0:00:24.801 ***********
2025-05-06 00:41:58.736838 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-05-06 00:41:58.740276 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-05-06 00:41:58.742251 | orchestrator |
2025-05-06 00:41:58.742294 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-06 00:41:58.742319 | orchestrator | Tuesday 06 May 2025 00:41:58 +0000 (0:00:00.169) 0:00:24.970 ***********
2025-05-06 00:41:58.859292 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:58.859802 | orchestrator |
2025-05-06 00:41:58.863259 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-06 00:41:58.864376 | orchestrator | Tuesday 06 May 2025 00:41:58 +0000 (0:00:00.123) 0:00:25.094 ***********
2025-05-06 00:41:59.025691 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:59.027182 | orchestrator |
2025-05-06 00:41:59.031587 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-06 00:41:59.032784 | orchestrator | Tuesday 06 May 2025 00:41:59 +0000 (0:00:00.164) 0:00:25.259 ***********
2025-05-06 00:41:59.179170 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:59.180629 | orchestrator |
2025-05-06 00:41:59.181666 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-06 00:41:59.182493 | orchestrator | Tuesday 06 May 2025 00:41:59 +0000 (0:00:00.148) 0:00:25.408 ***********
2025-05-06 00:41:59.332907 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:41:59.333161 | orchestrator |
2025-05-06 00:41:59.334280 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-06 00:41:59.337590 | orchestrator | Tuesday 06 May 2025 00:41:59 +0000 (0:00:00.158) 0:00:25.567 ***********
2025-05-06 00:41:59.526098 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a0f4265-dd5d-556c-ac35-a800ef93314e'}})
2025-05-06 00:41:59.528642 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '108592b4-5156-5470-952e-be389a9738cf'}})
2025-05-06 00:41:59.529580 | orchestrator |
2025-05-06 00:41:59.531336 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-06 00:41:59.532757 | orchestrator | Tuesday 06 May 2025 00:41:59 +0000 (0:00:00.194) 0:00:25.761 ***********
2025-05-06 00:41:59.741187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a0f4265-dd5d-556c-ac35-a800ef93314e'}})
2025-05-06 00:41:59.742718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '108592b4-5156-5470-952e-be389a9738cf'}})
2025-05-06 00:41:59.743557 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:41:59.744814 | orchestrator |
2025-05-06 00:41:59.745792 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-06 00:41:59.750227 | orchestrator | Tuesday 06 May 2025 00:41:59 +0000 (0:00:00.216) 0:00:25.977 ***********
2025-05-06 00:42:00.180584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a0f4265-dd5d-556c-ac35-a800ef93314e'}})
2025-05-06 00:42:00.180818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '108592b4-5156-5470-952e-be389a9738cf'}})
2025-05-06 00:42:00.181724 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:42:00.183520 | orchestrator |
2025-05-06 00:42:00.184446 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-06 00:42:00.185184 | orchestrator | Tuesday 06 May 2025 00:42:00 +0000 (0:00:00.438) 0:00:26.416 ***********
2025-05-06 00:42:00.344135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a0f4265-dd5d-556c-ac35-a800ef93314e'}})
2025-05-06 00:42:00.345413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '108592b4-5156-5470-952e-be389a9738cf'}})
2025-05-06 00:42:00.346608 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:42:00.347018 | orchestrator |
2025-05-06 00:42:00.350639 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-06 00:42:00.351552 | orchestrator | Tuesday 06 May 2025 00:42:00 +0000 (0:00:00.163) 0:00:26.580 ***********
2025-05-06 00:42:00.489638 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:42:00.490248 | orchestrator |
2025-05-06 00:42:00.491324 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-06 00:42:00.495386 | orchestrator | Tuesday 06 May 2025 00:42:00 +0000 (0:00:00.145) 0:00:26.725 ***********
2025-05-06 00:42:00.645739 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:42:00.647500 | orchestrator |
2025-05-06 00:42:00.648305 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-06 00:42:00.648347 | orchestrator | Tuesday 06 May 2025 00:42:00 +0000 (0:00:00.154) 0:00:26.880 ***********
2025-05-06 00:42:00.786176 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:42:00.937808 | orchestrator |
2025-05-06 00:42:00.937928 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-06 00:42:00.937946 | orchestrator | Tuesday 06 May 2025 00:42:00 +0000 (0:00:00.139) 0:00:27.020 ***********
2025-05-06 00:42:00.937977 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:42:00.938156 | orchestrator |
2025-05-06 00:42:00.941059 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-06 00:42:00.942721 | orchestrator | Tuesday 06 May 2025 00:42:00 +0000 (0:00:00.151) 0:00:27.172 ***********
2025-05-06 00:42:01.072830 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:42:01.073287 | orchestrator |
2025-05-06 00:42:01.074313 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-06 00:42:01.075434 | orchestrator | Tuesday 06 May 2025 00:42:01 +0000 (0:00:00.137) 0:00:27.309 ***********
2025-05-06 00:42:01.241490 | orchestrator | ok: [testbed-node-4] => {
2025-05-06 00:42:01.242711 | orchestrator |  "ceph_osd_devices": {
2025-05-06 00:42:01.242776 | orchestrator |  "sdb": {
2025-05-06 00:42:01.243915 | orchestrator |  "osd_lvm_uuid": "8a0f4265-dd5d-556c-ac35-a800ef93314e"
2025-05-06 00:42:01.245202 | orchestrator |  },
2025-05-06 00:42:01.246361 | orchestrator |  "sdc": {
2025-05-06 00:42:01.247257 | orchestrator |  "osd_lvm_uuid": "108592b4-5156-5470-952e-be389a9738cf"
2025-05-06 00:42:01.248583 | orchestrator |  }
2025-05-06 00:42:01.249568 | orchestrator |  }
2025-05-06 00:42:01.249945 | orchestrator | }
2025-05-06 00:42:01.251261 | orchestrator |
2025-05-06 00:42:01.252746 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-06 00:42:01.254214 | orchestrator | Tuesday 06 May 2025 00:42:01 +0000 (0:00:00.166) 0:00:27.475 ***********
2025-05-06 00:42:01.405604 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:42:01.406159 | orchestrator |
2025-05-06 00:42:01.406390 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-06 00:42:01.406428 | orchestrator | Tuesday 06 May 2025 00:42:01 +0000 (0:00:00.166) 0:00:27.642 ***********
2025-05-06 00:42:01.547875 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:42:01.548694 | orchestrator |
2025-05-06 00:42:01.549627 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-06 00:42:01.550334 | orchestrator | Tuesday 06 May 2025 00:42:01 +0000 (0:00:00.141) 0:00:27.784 ***********
2025-05-06 00:42:01.686530 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:42:01.687262 | orchestrator |
2025-05-06 00:42:01.687315 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-06 00:42:01.687951 | orchestrator | Tuesday 06 May 2025 00:42:01 +0000 (0:00:00.138) 0:00:27.922 ***********
2025-05-06 00:42:02.210680 | orchestrator | changed: [testbed-node-4] => {
2025-05-06 00:42:02.210895 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-06 00:42:02.211214 | orchestrator |  "ceph_osd_devices": {
2025-05-06 00:42:02.212155 | orchestrator |  "sdb": {
2025-05-06 00:42:02.212870 | orchestrator |  "osd_lvm_uuid": "8a0f4265-dd5d-556c-ac35-a800ef93314e"
2025-05-06 00:42:02.213903 | orchestrator |  },
2025-05-06 00:42:02.214556 | orchestrator |  "sdc": {
2025-05-06 00:42:02.215286 | orchestrator |  "osd_lvm_uuid": "108592b4-5156-5470-952e-be389a9738cf"
2025-05-06 00:42:02.215799 | orchestrator |  }
2025-05-06 00:42:02.216165 | orchestrator |  },
2025-05-06 00:42:02.216647 | orchestrator |  "lvm_volumes": [
2025-05-06 00:42:02.217337 | orchestrator |  {
2025-05-06 00:42:02.217652 | orchestrator |  "data": "osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e",
2025-05-06 00:42:02.218104 | orchestrator |  "data_vg": "ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e"
2025-05-06 00:42:02.218593 | orchestrator |  },
2025-05-06 00:42:02.219199 | orchestrator |  {
2025-05-06 00:42:02.219301 | orchestrator |  "data": "osd-block-108592b4-5156-5470-952e-be389a9738cf",
2025-05-06 00:42:02.219759 | orchestrator |  "data_vg": "ceph-108592b4-5156-5470-952e-be389a9738cf"
2025-05-06 00:42:02.223894 | orchestrator |  }
2025-05-06 00:42:02.224139 | orchestrator |  ]
2025-05-06 00:42:02.224622 | orchestrator |  }
2025-05-06 00:42:02.224918 | orchestrator | }
2025-05-06 00:42:02.225251 | orchestrator |
2025-05-06 00:42:02.225583 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-06 00:42:02.225952 | orchestrator | Tuesday 06 May 2025 00:42:02 +0000 (0:00:00.519) 0:00:28.442 ***********
2025-05-06 00:42:03.681011 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-06 00:42:03.899778 | orchestrator |
2025-05-06 00:42:03.917256 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-06 00:42:03.917350 | orchestrator |
2025-05-06 00:42:03.917368 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-06 00:42:03.917383 | orchestrator | Tuesday 06 May 2025 00:42:03 +0000 (0:00:01.474) 0:00:29.916 ***********
2025-05-06 00:42:03.917412 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-06 00:42:03.918132 | orchestrator |
2025-05-06 00:42:03.919256 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-06 00:42:03.920284 | orchestrator | Tuesday 06 May 2025 00:42:03 +0000 (0:00:00.237) 0:00:30.154 ***********
2025-05-06 00:42:04.182443 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:42:04.182852 | orchestrator |
2025-05-06 00:42:04.184451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:04.186619 | orchestrator | Tuesday 06 May 2025 00:42:04 +0000 (0:00:00.264) 0:00:30.418 ***********
2025-05-06 00:42:04.912944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-06 00:42:04.914150 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-06 00:42:04.914229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-06 00:42:04.914266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-06 00:42:04.917197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-06 00:42:04.917762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-06 00:42:04.918507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-06 00:42:04.918939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-06 00:42:04.919635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-06 00:42:04.920094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-06 00:42:04.920670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-06 00:42:04.921172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-06 00:42:04.921594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-06 00:42:04.923245 | orchestrator |
2025-05-06 00:42:04.923633 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:04.923680 | orchestrator | Tuesday 06 May 2025 00:42:04 +0000 (0:00:00.729) 0:00:31.148 ***********
2025-05-06 00:42:05.130747 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:05.131686 | orchestrator |
2025-05-06 00:42:05.134400 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:05.326938 | orchestrator | Tuesday 06 May 2025 00:42:05 +0000 (0:00:00.217) 0:00:31.365 ***********
2025-05-06 00:42:05.327151 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:05.327654 | orchestrator |
2025-05-06 00:42:05.327785 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:05.328377 | orchestrator | Tuesday 06 May 2025 00:42:05 +0000 (0:00:00.197) 0:00:31.562 ***********
2025-05-06 00:42:05.522635 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:05.526302 | orchestrator |
2025-05-06 00:42:05.527129 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:05.527177 | orchestrator | Tuesday 06 May 2025 00:42:05 +0000 (0:00:00.195) 0:00:31.758 ***********
2025-05-06 00:42:05.732128 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:05.732913 | orchestrator |
2025-05-06 00:42:05.734161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:05.734709 | orchestrator | Tuesday 06 May 2025 00:42:05 +0000 (0:00:00.210) 0:00:31.968 ***********
2025-05-06 00:42:05.926449 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:05.927211 | orchestrator |
2025-05-06 00:42:05.928314 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:05.930301 | orchestrator | Tuesday 06 May 2025 00:42:05 +0000 (0:00:00.194) 0:00:32.162 ***********
2025-05-06 00:42:06.135484 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:06.136645 | orchestrator |
2025-05-06 00:42:06.137429 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:06.139451 | orchestrator | Tuesday 06 May 2025 00:42:06 +0000 (0:00:00.206) 0:00:32.369 ***********
2025-05-06 00:42:06.373827 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:06.374578 | orchestrator |
2025-05-06 00:42:06.374843 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:06.376194 | orchestrator | Tuesday 06 May 2025 00:42:06 +0000 (0:00:00.240) 0:00:32.610 ***********
2025-05-06 00:42:06.572867 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:06.573044 | orchestrator |
2025-05-06 00:42:06.574499 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:06.574896 | orchestrator | Tuesday 06 May 2025 00:42:06 +0000 (0:00:00.198) 0:00:32.809 ***********
2025-05-06 00:42:07.523362 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247)
2025-05-06 00:42:07.523627 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247)
2025-05-06 00:42:07.527248 | orchestrator |
2025-05-06 00:42:07.529087 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:07.529379 | orchestrator | Tuesday 06 May 2025 00:42:07 +0000 (0:00:00.949) 0:00:33.758 ***********
2025-05-06 00:42:07.959792 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9f4cae81-5600-43ad-ae81-4d2d3f64aa06)
2025-05-06 00:42:07.960622 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9f4cae81-5600-43ad-ae81-4d2d3f64aa06)
2025-05-06 00:42:07.961785 | orchestrator |
2025-05-06 00:42:07.963008 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:07.964224 | orchestrator | Tuesday 06 May 2025 00:42:07 +0000 (0:00:00.437) 0:00:34.196 ***********
2025-05-06 00:42:08.411677 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a5a4c6fa-807d-44c7-a556-c4522912d679)
2025-05-06 00:42:08.856563 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a5a4c6fa-807d-44c7-a556-c4522912d679)
2025-05-06 00:42:08.857546 | orchestrator |
2025-05-06 00:42:08.857612 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:08.857637 | orchestrator | Tuesday 06 May 2025 00:42:08 +0000 (0:00:00.436) 0:00:34.633 ***********
2025-05-06 00:42:08.857686 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f2e4c6c8-e338-4410-96b4-d1d5dab5be16)
2025-05-06 00:42:08.857835 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f2e4c6c8-e338-4410-96b4-d1d5dab5be16)
2025-05-06 00:42:08.858280 | orchestrator |
2025-05-06 00:42:08.859315 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:08.860161 | orchestrator | Tuesday 06 May 2025 00:42:08 +0000 (0:00:00.459) 0:00:35.093 ***********
2025-05-06 00:42:09.196282 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-06 00:42:09.199168 | orchestrator |
2025-05-06 00:42:09.199223 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:09.199284 | orchestrator | Tuesday 06 May 2025 00:42:09 +0000 (0:00:00.337) 0:00:35.430 ***********
2025-05-06 00:42:09.597329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-05-06 00:42:09.597523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-05-06 00:42:09.597974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-05-06 00:42:09.598012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-05-06 00:42:09.598139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-05-06 00:42:09.598821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-05-06 00:42:09.599363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-05-06 00:42:09.600223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-05-06 00:42:09.600260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-05-06 00:42:09.600771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-05-06 00:42:09.601798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-05-06 00:42:09.602009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-05-06 00:42:09.602560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-05-06 00:42:09.603139 | orchestrator |
2025-05-06 00:42:09.603431 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:09.603466 | orchestrator | Tuesday 06 May 2025 00:42:09 +0000 (0:00:00.403) 0:00:35.834 ***********
2025-05-06 00:42:09.815464 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:09.815641 | orchestrator |
2025-05-06 00:42:09.816715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:09.818999 | orchestrator | Tuesday 06 May 2025 00:42:09 +0000 (0:00:00.216) 0:00:36.051 ***********
2025-05-06 00:42:10.002638 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:10.003216 | orchestrator |
2025-05-06 00:42:10.004209 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:10.004579 | orchestrator | Tuesday 06 May 2025 00:42:09 +0000 (0:00:00.186) 0:00:36.238 ***********
2025-05-06 00:42:10.226002 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:10.226300 | orchestrator |
2025-05-06 00:42:10.226327 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:10.226350 | orchestrator | Tuesday 06 May 2025 00:42:10 +0000 (0:00:00.222) 0:00:36.460 ***********
2025-05-06 00:42:10.429433 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:10.430729 | orchestrator |
2025-05-06 00:42:10.430770 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:11.002175 | orchestrator | Tuesday 06 May 2025 00:42:10 +0000 (0:00:00.204) 0:00:36.665 ***********
2025-05-06 00:42:11.002430 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:11.002530 | orchestrator |
2025-05-06 00:42:11.003504 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:11.005502 | orchestrator | Tuesday 06 May 2025 00:42:10 +0000 (0:00:00.573) 0:00:37.238 ***********
2025-05-06 00:42:11.206878 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:11.207543 | orchestrator |
2025-05-06 00:42:11.207609 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:11.208260 | orchestrator | Tuesday 06 May 2025 00:42:11 +0000 (0:00:00.196) 0:00:37.435 ***********
2025-05-06 00:42:11.409699 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:11.410439 | orchestrator |
2025-05-06 00:42:11.411538 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:11.412325 | orchestrator | Tuesday 06 May 2025 00:42:11 +0000 (0:00:00.210) 0:00:37.646 ***********
2025-05-06 00:42:11.600192 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:11.601462 | orchestrator |
2025-05-06 00:42:11.602498 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:11.603376 | orchestrator | Tuesday 06 May 2025 00:42:11 +0000 (0:00:00.190) 0:00:37.836 ***********
2025-05-06 00:42:12.211893 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-05-06 00:42:12.212364 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-05-06 00:42:12.213439 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-05-06 00:42:12.214903 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-05-06 00:42:12.217573 | orchestrator |
2025-05-06 00:42:12.419937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:12.420054 | orchestrator | Tuesday 06 May 2025 00:42:12 +0000 (0:00:00.610) 0:00:38.447 ***********
2025-05-06 00:42:12.420126 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:12.420889 | orchestrator |
2025-05-06 00:42:12.422176 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:12.422925 | orchestrator | Tuesday 06 May 2025 00:42:12 +0000 (0:00:00.208) 0:00:38.655 ***********
2025-05-06 00:42:12.626933 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:12.627589 | orchestrator |
2025-05-06 00:42:12.628767 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:12.629591 | orchestrator | Tuesday 06 May 2025 00:42:12 +0000 (0:00:00.207) 0:00:38.862 ***********
2025-05-06 00:42:12.849526 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:12.850555 | orchestrator |
2025-05-06 00:42:12.851755 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:42:12.855031 | orchestrator | Tuesday 06 May 2025 00:42:12 +0000 (0:00:00.222) 0:00:39.085 ***********
2025-05-06 00:42:13.059381 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:13.059821 | orchestrator |
2025-05-06 00:42:13.060255 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-06 00:42:13.061192 | orchestrator | Tuesday 06 May 2025 00:42:13 +0000 (0:00:00.211) 0:00:39.296 ***********
2025-05-06 00:42:13.237188 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-05-06 00:42:13.238197 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-05-06 00:42:13.238596 | orchestrator |
2025-05-06 00:42:13.239485 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-06 00:42:13.240141 | orchestrator | Tuesday 06 May 2025 00:42:13 +0000 (0:00:00.175) 0:00:39.471 ***********
2025-05-06 00:42:13.529654 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:13.530842 | orchestrator |
2025-05-06 00:42:13.531097 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-06 00:42:13.532456 | orchestrator | Tuesday 06 May 2025 00:42:13 +0000 (0:00:00.294) 0:00:39.766 ***********
2025-05-06 00:42:13.668928 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:42:13.669415 | orchestrator | 2025-05-06 00:42:13.670700 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-06 00:42:13.671460 | orchestrator | Tuesday 06 May 2025 00:42:13 +0000 (0:00:00.139) 0:00:39.905 *********** 2025-05-06 00:42:13.802151 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:42:13.802367 | orchestrator | 2025-05-06 00:42:13.803975 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-06 00:42:13.804905 | orchestrator | Tuesday 06 May 2025 00:42:13 +0000 (0:00:00.132) 0:00:40.037 *********** 2025-05-06 00:42:13.947302 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:42:13.948172 | orchestrator | 2025-05-06 00:42:13.949364 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-06 00:42:13.950312 | orchestrator | Tuesday 06 May 2025 00:42:13 +0000 (0:00:00.146) 0:00:40.184 *********** 2025-05-06 00:42:14.135800 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5100a9d2-ae69-5e7a-989d-a5d69986fee9'}}) 2025-05-06 00:42:14.136496 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '376b0c1a-f7d0-50df-9bf6-f05e021d85c5'}}) 2025-05-06 00:42:14.137320 | orchestrator | 2025-05-06 00:42:14.138252 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-06 00:42:14.139271 | orchestrator | Tuesday 06 May 2025 00:42:14 +0000 (0:00:00.188) 0:00:40.372 *********** 2025-05-06 00:42:14.312693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5100a9d2-ae69-5e7a-989d-a5d69986fee9'}})  2025-05-06 00:42:14.312884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '376b0c1a-f7d0-50df-9bf6-f05e021d85c5'}})  
2025-05-06 00:42:14.313750 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:14.314566 | orchestrator |
2025-05-06 00:42:14.315260 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-06 00:42:14.317570 | orchestrator | Tuesday 06 May 2025 00:42:14 +0000 (0:00:00.176) 0:00:40.548 ***********
2025-05-06 00:42:14.473407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5100a9d2-ae69-5e7a-989d-a5d69986fee9'}})
2025-05-06 00:42:14.474393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '376b0c1a-f7d0-50df-9bf6-f05e021d85c5'}})
2025-05-06 00:42:14.474491 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:14.475138 | orchestrator |
2025-05-06 00:42:14.476099 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-06 00:42:14.476793 | orchestrator | Tuesday 06 May 2025 00:42:14 +0000 (0:00:00.160) 0:00:40.709 ***********
2025-05-06 00:42:14.654628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5100a9d2-ae69-5e7a-989d-a5d69986fee9'}})
2025-05-06 00:42:14.654958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '376b0c1a-f7d0-50df-9bf6-f05e021d85c5'}})
2025-05-06 00:42:14.655690 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:14.656653 | orchestrator |
2025-05-06 00:42:14.657160 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-06 00:42:14.657633 | orchestrator | Tuesday 06 May 2025 00:42:14 +0000 (0:00:00.182) 0:00:40.891 ***********
2025-05-06 00:42:14.799627 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:42:14.799963 | orchestrator |
2025-05-06 00:42:14.802176 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-06 00:42:14.803172 | orchestrator | Tuesday 06 May 2025 00:42:14 +0000 (0:00:00.144) 0:00:41.036 ***********
2025-05-06 00:42:14.945610 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:42:14.950180 | orchestrator |
2025-05-06 00:42:14.950225 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-06 00:42:14.950248 | orchestrator | Tuesday 06 May 2025 00:42:14 +0000 (0:00:00.145) 0:00:41.181 ***********
2025-05-06 00:42:15.085672 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:15.086843 | orchestrator |
2025-05-06 00:42:15.086908 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-06 00:42:15.087787 | orchestrator | Tuesday 06 May 2025 00:42:15 +0000 (0:00:00.138) 0:00:41.320 ***********
2025-05-06 00:42:15.229631 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:15.230706 | orchestrator |
2025-05-06 00:42:15.231720 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-06 00:42:15.233030 | orchestrator | Tuesday 06 May 2025 00:42:15 +0000 (0:00:00.144) 0:00:41.465 ***********
2025-05-06 00:42:15.565595 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:15.566231 | orchestrator |
2025-05-06 00:42:15.567404 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-06 00:42:15.570394 | orchestrator | Tuesday 06 May 2025 00:42:15 +0000 (0:00:00.336) 0:00:41.801 ***********
2025-05-06 00:42:15.712842 | orchestrator | ok: [testbed-node-5] => {
2025-05-06 00:42:15.713372 | orchestrator |  "ceph_osd_devices": {
2025-05-06 00:42:15.713423 | orchestrator |  "sdb": {
2025-05-06 00:42:15.715103 | orchestrator |  "osd_lvm_uuid": "5100a9d2-ae69-5e7a-989d-a5d69986fee9"
2025-05-06 00:42:15.715993 | orchestrator |  },
2025-05-06 00:42:15.716937 | orchestrator |  "sdc": {
2025-05-06 00:42:15.717702 | orchestrator |  "osd_lvm_uuid": "376b0c1a-f7d0-50df-9bf6-f05e021d85c5"
2025-05-06 00:42:15.718509 | orchestrator |  }
2025-05-06 00:42:15.719050 | orchestrator |  }
2025-05-06 00:42:15.719454 | orchestrator | }
2025-05-06 00:42:15.720038 | orchestrator |
2025-05-06 00:42:15.720631 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-06 00:42:15.721201 | orchestrator | Tuesday 06 May 2025 00:42:15 +0000 (0:00:00.145) 0:00:41.947 ***********
2025-05-06 00:42:15.855461 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:15.856191 | orchestrator |
2025-05-06 00:42:15.857211 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-06 00:42:15.857631 | orchestrator | Tuesday 06 May 2025 00:42:15 +0000 (0:00:00.141) 0:00:42.088 ***********
2025-05-06 00:42:15.989630 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:15.990112 | orchestrator |
2025-05-06 00:42:15.990158 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-06 00:42:15.990606 | orchestrator | Tuesday 06 May 2025 00:42:15 +0000 (0:00:00.137) 0:00:42.226 ***********
2025-05-06 00:42:16.135659 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:42:16.136350 | orchestrator |
2025-05-06 00:42:16.137237 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-06 00:42:16.137916 | orchestrator | Tuesday 06 May 2025 00:42:16 +0000 (0:00:00.146) 0:00:42.372 ***********
2025-05-06 00:42:16.414437 | orchestrator | changed: [testbed-node-5] => {
2025-05-06 00:42:16.414862 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-06 00:42:16.416363 | orchestrator |  "ceph_osd_devices": {
2025-05-06 00:42:16.418317 | orchestrator |  "sdb": {
2025-05-06 00:42:16.418698 | orchestrator |  "osd_lvm_uuid": "5100a9d2-ae69-5e7a-989d-a5d69986fee9"
2025-05-06 00:42:16.418733 | orchestrator |  },
2025-05-06 00:42:16.419756 | orchestrator |  "sdc": {
2025-05-06 00:42:16.420734 | orchestrator |  "osd_lvm_uuid": "376b0c1a-f7d0-50df-9bf6-f05e021d85c5"
2025-05-06 00:42:16.421708 | orchestrator |  }
2025-05-06 00:42:16.422265 | orchestrator |  },
2025-05-06 00:42:16.422942 | orchestrator |  "lvm_volumes": [
2025-05-06 00:42:16.423628 | orchestrator |  {
2025-05-06 00:42:16.424269 | orchestrator |  "data": "osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9",
2025-05-06 00:42:16.424587 | orchestrator |  "data_vg": "ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9"
2025-05-06 00:42:16.425125 | orchestrator |  },
2025-05-06 00:42:16.425397 | orchestrator |  {
2025-05-06 00:42:16.425953 | orchestrator |  "data": "osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5",
2025-05-06 00:42:16.426264 | orchestrator |  "data_vg": "ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5"
2025-05-06 00:42:16.426975 | orchestrator |  }
2025-05-06 00:42:16.427267 | orchestrator |  ]
2025-05-06 00:42:16.428324 | orchestrator |  }
2025-05-06 00:42:16.428632 | orchestrator | }
2025-05-06 00:42:16.429497 | orchestrator |
2025-05-06 00:42:16.429689 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-06 00:42:16.430090 | orchestrator | Tuesday 06 May 2025 00:42:16 +0000 (0:00:00.276) 0:00:42.649 ***********
2025-05-06 00:42:17.559284 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-06 00:42:17.560008 | orchestrator |
2025-05-06 00:42:17.561727 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:42:17.562100 | orchestrator | 2025-05-06 00:42:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-06 00:42:17.563170 | orchestrator | 2025-05-06 00:42:17 | INFO  | Please wait and do not abort execution.
2025-05-06 00:42:17.563205 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-06 00:42:17.564432 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-06 00:42:17.564650 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-06 00:42:17.565974 | orchestrator |
2025-05-06 00:42:17.566877 | orchestrator |
2025-05-06 00:42:17.567559 | orchestrator |
2025-05-06 00:42:17.568503 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 00:42:17.569138 | orchestrator | Tuesday 06 May 2025 00:42:17 +0000 (0:00:01.144) 0:00:43.793 ***********
2025-05-06 00:42:17.569928 | orchestrator | ===============================================================================
2025-05-06 00:42:17.570910 | orchestrator | Write configuration file ------------------------------------------------ 5.10s
2025-05-06 00:42:17.571622 | orchestrator | Add known partitions to the list of available block devices ------------- 1.66s
2025-05-06 00:42:17.573087 | orchestrator | Add known links to the list of available block devices ------------------ 1.63s
2025-05-06 00:42:17.574154 | orchestrator | Print configuration data ------------------------------------------------ 1.06s
2025-05-06 00:42:17.574697 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s
2025-05-06 00:42:17.575153 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.86s
2025-05-06 00:42:17.575670 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-05-06 00:42:17.576403 | orchestrator | Get initial list of available block devices ----------------------------- 0.82s
2025-05-06 00:42:17.576869 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.78s
2025-05-06 00:42:17.577191 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2025-05-06 00:42:17.577904 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.71s
2025-05-06 00:42:17.578137 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.61s
2025-05-06 00:42:17.578839 | orchestrator | Generate DB VG names ---------------------------------------------------- 0.61s
2025-05-06 00:42:17.579435 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-05-06 00:42:17.579850 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.58s
2025-05-06 00:42:17.580486 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s
2025-05-06 00:42:17.580884 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s
2025-05-06 00:42:17.581384 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s
2025-05-06 00:42:17.581914 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.56s
2025-05-06 00:42:17.582190 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s
2025-05-06 00:42:29.505667 | orchestrator | 2025-05-06 00:42:29 | INFO  | Task a7686175-e184-440e-974a-81329b3acf6c is running in background. Output coming soon.
2025-05-06 00:42:52.343709 | orchestrator | 2025-05-06 00:42:43 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-06 00:42:53.929896 | orchestrator | 2025-05-06 00:42:43 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-06 00:42:53.930165 | orchestrator | 2025-05-06 00:42:43 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-06 00:42:53.930206 | orchestrator | 2025-05-06 00:42:44 | INFO  | Handling group overwrites in 99-overwrite
2025-05-06 00:42:53.930237 | orchestrator | 2025-05-06 00:42:44 | INFO  | Removing group frr:children from 60-generic
2025-05-06 00:42:53.930252 | orchestrator | 2025-05-06 00:42:44 | INFO  | Removing group storage:children from 50-kolla
2025-05-06 00:42:53.930280 | orchestrator | 2025-05-06 00:42:44 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-06 00:42:53.930296 | orchestrator | 2025-05-06 00:42:44 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-06 00:42:53.930310 | orchestrator | 2025-05-06 00:42:44 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-06 00:42:53.930325 | orchestrator | 2025-05-06 00:42:44 | INFO  | Handling group overwrites in 20-roles
2025-05-06 00:42:53.930339 | orchestrator | 2025-05-06 00:42:44 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-06 00:42:53.930353 | orchestrator | 2025-05-06 00:42:44 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-06 00:42:53.930367 | orchestrator | 2025-05-06 00:42:52 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-06 00:42:53.930402 | orchestrator | 2025-05-06 00:42:53 | INFO  | Task e443d291-a3c4-49f8-8587-31a11db244ed (ceph-create-lvm-devices) was prepared for execution.
2025-05-06 00:42:56.822956 | orchestrator | 2025-05-06 00:42:53 | INFO  | It takes a moment until task e443d291-a3c4-49f8-8587-31a11db244ed (ceph-create-lvm-devices) has been started and output is visible here.
2025-05-06 00:42:56.823203 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-06 00:42:57.259153 | orchestrator |
2025-05-06 00:42:57.259634 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-06 00:42:57.260747 | orchestrator |
2025-05-06 00:42:57.264205 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-06 00:42:57.264813 | orchestrator | Tuesday 06 May 2025 00:42:57 +0000 (0:00:00.379) 0:00:00.379 ***********
2025-05-06 00:42:57.474246 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-06 00:42:57.474396 | orchestrator |
2025-05-06 00:42:57.477106 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-06 00:42:57.676112 | orchestrator | Tuesday 06 May 2025 00:42:57 +0000 (0:00:00.216) 0:00:00.595 ***********
2025-05-06 00:42:57.676287 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:42:57.676394 | orchestrator |
2025-05-06 00:42:57.676698 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:57.677345 | orchestrator | Tuesday 06 May 2025 00:42:57 +0000 (0:00:00.201) 0:00:00.797 ***********
2025-05-06 00:42:58.203354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-06 00:42:58.204530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-06 00:42:58.206491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-06 00:42:58.206942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-06 00:42:58.207341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-06 00:42:58.208270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-06 00:42:58.208743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-06 00:42:58.209829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-06 00:42:58.210846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-06 00:42:58.212283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-06 00:42:58.212536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-06 00:42:58.213334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-06 00:42:58.217023 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-06 00:42:58.217353 | orchestrator |
2025-05-06 00:42:58.217773 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:58.218132 | orchestrator | Tuesday 06 May 2025 00:42:58 +0000 (0:00:00.527) 0:00:01.324 ***********
2025-05-06 00:42:58.384983 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:42:58.387367 | orchestrator |
2025-05-06 00:42:58.387463 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:58.387495 | orchestrator | Tuesday 06 May 2025 00:42:58 +0000 (0:00:00.181) 0:00:01.506 ***********
2025-05-06 00:42:58.537358 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:42:58.537575 | orchestrator |
2025-05-06 00:42:58.538397 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:58.539109 | orchestrator | Tuesday 06 May 2025 00:42:58 +0000 (0:00:00.153) 0:00:01.659 ***********
2025-05-06 00:42:58.721116 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:42:58.721862 | orchestrator |
2025-05-06 00:42:58.724529 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:58.896853 | orchestrator | Tuesday 06 May 2025 00:42:58 +0000 (0:00:00.183) 0:00:01.842 ***********
2025-05-06 00:42:58.897036 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:42:58.900568 | orchestrator |
2025-05-06 00:42:58.901413 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:58.902382 | orchestrator | Tuesday 06 May 2025 00:42:58 +0000 (0:00:00.174) 0:00:02.017 ***********
2025-05-06 00:42:59.073269 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:42:59.075838 | orchestrator |
2025-05-06 00:42:59.075875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:59.075898 | orchestrator | Tuesday 06 May 2025 00:42:59 +0000 (0:00:00.176) 0:00:02.194 ***********
2025-05-06 00:42:59.241324 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:42:59.241515 | orchestrator |
2025-05-06 00:42:59.242808 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:59.245847 | orchestrator | Tuesday 06 May 2025 00:42:59 +0000 (0:00:00.168) 0:00:02.363 ***********
2025-05-06 00:42:59.426300 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:42:59.426879 | orchestrator |
2025-05-06 00:42:59.426922 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:59.427568 | orchestrator | Tuesday 06 May 2025 00:42:59 +0000 (0:00:00.184) 0:00:02.547 ***********
2025-05-06 00:42:59.599898 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:42:59.600155 | orchestrator |
2025-05-06 00:42:59.603383 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:42:59.604181 | orchestrator | Tuesday 06 May 2025 00:42:59 +0000 (0:00:00.174) 0:00:02.721 ***********
2025-05-06 00:43:00.094583 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0)
2025-05-06 00:43:00.098360 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0)
2025-05-06 00:43:00.099211 | orchestrator |
2025-05-06 00:43:00.099799 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:00.100529 | orchestrator | Tuesday 06 May 2025 00:43:00 +0000 (0:00:00.493) 0:00:03.215 ***********
2025-05-06 00:43:00.686569 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8c0721df-98b6-45a8-8372-f184b99eacbe)
2025-05-06 00:43:00.689463 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8c0721df-98b6-45a8-8372-f184b99eacbe)
2025-05-06 00:43:00.689741 | orchestrator |
2025-05-06 00:43:00.690547 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:00.691176 | orchestrator | Tuesday 06 May 2025 00:43:00 +0000 (0:00:00.590) 0:00:03.806 ***********
2025-05-06 00:43:01.062481 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc7f276d-c2ba-4b91-9f6b-a505ec6ab98a)
2025-05-06 00:43:01.062640 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc7f276d-c2ba-4b91-9f6b-a505ec6ab98a)
2025-05-06 00:43:01.063416 | orchestrator |
2025-05-06 00:43:01.064483 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:01.066505 | orchestrator | Tuesday 06 May 2025 00:43:01 +0000 (0:00:00.377) 0:00:04.183 ***********
2025-05-06 00:43:01.458675 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7e976783-2213-433c-91fb-66c729e68827)
2025-05-06 00:43:01.460911 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7e976783-2213-433c-91fb-66c729e68827)
2025-05-06 00:43:01.748048 | orchestrator |
2025-05-06 00:43:01.748195 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:01.748248 | orchestrator | Tuesday 06 May 2025 00:43:01 +0000 (0:00:00.394) 0:00:04.578 ***********
2025-05-06 00:43:01.748280 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-06 00:43:01.748363 | orchestrator |
2025-05-06 00:43:01.748672 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:01.748704 | orchestrator | Tuesday 06 May 2025 00:43:01 +0000 (0:00:00.292) 0:00:04.870 ***********
2025-05-06 00:43:02.145775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-06 00:43:02.146071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-06 00:43:02.147238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-06 00:43:02.150866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-06 00:43:02.151324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-06 00:43:02.152024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-06 00:43:02.152846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-06 00:43:02.153414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-06 00:43:02.153942 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-06 00:43:02.154516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-06 00:43:02.157052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-06 00:43:02.157455 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-06 00:43:02.158090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-06 00:43:02.158772 | orchestrator |
2025-05-06 00:43:02.159538 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:02.160050 | orchestrator | Tuesday 06 May 2025 00:43:02 +0000 (0:00:00.397) 0:00:05.267 ***********
2025-05-06 00:43:02.321700 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:02.322307 | orchestrator |
2025-05-06 00:43:02.322772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:02.323363 | orchestrator | Tuesday 06 May 2025 00:43:02 +0000 (0:00:00.174) 0:00:05.442 ***********
2025-05-06 00:43:02.496788 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:02.497480 | orchestrator |
2025-05-06 00:43:02.498280 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:02.498849 | orchestrator | Tuesday 06 May 2025 00:43:02 +0000 (0:00:00.173) 0:00:05.615 ***********
2025-05-06 00:43:02.663642 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:02.664621 | orchestrator |
2025-05-06 00:43:02.664664 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:02.665565 | orchestrator | Tuesday 06 May 2025 00:43:02 +0000 (0:00:00.168) 0:00:05.784 ***********
2025-05-06 00:43:02.837845 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:02.838591 | orchestrator |
2025-05-06 00:43:02.839396 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:02.839865 | orchestrator | Tuesday 06 May 2025 00:43:02 +0000 (0:00:00.175) 0:00:05.959 ***********
2025-05-06 00:43:03.236702 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:03.236902 | orchestrator |
2025-05-06 00:43:03.237360 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:03.237965 | orchestrator | Tuesday 06 May 2025 00:43:03 +0000 (0:00:00.398) 0:00:06.358 ***********
2025-05-06 00:43:03.415669 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:03.416070 | orchestrator |
2025-05-06 00:43:03.416593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:03.417273 | orchestrator | Tuesday 06 May 2025 00:43:03 +0000 (0:00:00.177) 0:00:06.536 ***********
2025-05-06 00:43:03.595596 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:03.595929 | orchestrator |
2025-05-06 00:43:03.596796 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:03.597630 | orchestrator | Tuesday 06 May 2025 00:43:03 +0000 (0:00:00.181) 0:00:06.717 ***********
2025-05-06 00:43:03.780142 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:03.780370 | orchestrator |
2025-05-06 00:43:03.781053 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:03.781608 | orchestrator | Tuesday 06 May 2025 00:43:03 +0000 (0:00:00.184) 0:00:06.901 ***********
2025-05-06 00:43:04.355085 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-06 00:43:04.355286 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-06 00:43:04.355317 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-06 00:43:04.355870 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-06 00:43:04.356685 | orchestrator |
2025-05-06 00:43:04.358406 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:04.358785 | orchestrator | Tuesday 06 May 2025 00:43:04 +0000 (0:00:00.575) 0:00:07.477 ***********
2025-05-06 00:43:04.535389 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:04.536459 | orchestrator |
2025-05-06 00:43:04.537129 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:04.537815 | orchestrator | Tuesday 06 May 2025 00:43:04 +0000 (0:00:00.178) 0:00:07.655 ***********
2025-05-06 00:43:04.716234 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:04.717253 | orchestrator |
2025-05-06 00:43:04.717441 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:04.718114 | orchestrator | Tuesday 06 May 2025 00:43:04 +0000 (0:00:00.182) 0:00:07.838 ***********
2025-05-06 00:43:04.900187 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:04.900405 | orchestrator |
2025-05-06 00:43:04.900893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:04.900946 | orchestrator | Tuesday 06 May 2025 00:43:04 +0000 (0:00:00.183) 0:00:08.022 ***********
2025-05-06 00:43:05.084737 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:05.085399 | orchestrator |
2025-05-06 00:43:05.085834 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-06 00:43:05.086443 | orchestrator | Tuesday 06 May 2025 00:43:05 +0000 (0:00:00.184) 0:00:08.206 ***********
2025-05-06 00:43:05.196261 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:05.196836 | orchestrator |
2025-05-06 00:43:05.197930 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-06 00:43:05.198735 | orchestrator | Tuesday 06 May 2025 00:43:05 +0000 (0:00:00.111) 0:00:08.318 ***********
2025-05-06 00:43:05.366564 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83550523-1175-5b11-b232-63a45b36e32a'}})
2025-05-06 00:43:05.367104 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2fbee355-69b3-5569-a73a-eae1d5356d34'}})
2025-05-06 00:43:05.368196 | orchestrator |
2025-05-06 00:43:05.368999 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-06 00:43:05.369617 | orchestrator | Tuesday 06 May 2025 00:43:05 +0000 (0:00:00.170) 0:00:08.488 ***********
2025-05-06 00:43:07.406694 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})
2025-05-06 00:43:07.406905 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})
2025-05-06 00:43:07.407629 | orchestrator |
2025-05-06 00:43:07.408444 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-06 00:43:07.409050 | orchestrator | Tuesday 06 May 2025 00:43:07 +0000 (0:00:02.037) 0:00:10.526 ***********
2025-05-06 00:43:07.589623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})
2025-05-06 00:43:07.591139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})
2025-05-06 00:43:07.591920 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:07.594157 | orchestrator |
2025-05-06 00:43:07.595106 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-06 00:43:07.595755 | orchestrator | Tuesday 06 May 2025 00:43:07 +0000 (0:00:00.184) 0:00:10.710 ***********
2025-05-06 00:43:09.061642 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})
2025-05-06 00:43:09.062243 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})
2025-05-06 00:43:09.062289 | orchestrator |
2025-05-06 00:43:09.062316 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-06 00:43:09.062567 | orchestrator | Tuesday 06 May 2025 00:43:09 +0000 (0:00:01.469) 0:00:12.180 ***********
2025-05-06 00:43:09.221515 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})
2025-05-06 00:43:09.223020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})
2025-05-06 00:43:09.225647 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:09.226830 | orchestrator |
2025-05-06 00:43:09.228026 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-06 00:43:09.228759 | orchestrator | Tuesday 06 May 2025 00:43:09 +0000 (0:00:00.161) 0:00:12.342 ***********
2025-05-06 00:43:09.357401 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:09.358104 | orchestrator |
2025-05-06 00:43:09.358571 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-06 00:43:09.359572 | orchestrator | Tuesday 06 May 2025 00:43:09 +0000 (0:00:00.136) 0:00:12.479 ***********
2025-05-06 00:43:09.527760 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})
2025-05-06 00:43:09.531304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})
2025-05-06 00:43:09.532026 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:09.532763 | orchestrator |
2025-05-06 00:43:09.532961 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-06 00:43:09.533427 | orchestrator | Tuesday 06 May 2025 00:43:09 +0000 (0:00:00.168) 0:00:12.647 ***********
2025-05-06 00:43:09.671515 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:09.672473 | orchestrator |
2025-05-06 00:43:09.673464 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-06 00:43:09.675171 | orchestrator | Tuesday 06 May 2025 00:43:09 +0000 (0:00:00.146) 0:00:12.793 ***********
2025-05-06 00:43:09.833792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})
2025-05-06 00:43:09.834766 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})
2025-05-06 00:43:09.838204 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:09.839175 | orchestrator |
2025-05-06 00:43:09.967336 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-06 00:43:09.967445 | orchestrator | Tuesday 06 May 2025 00:43:09 +0000 (0:00:00.161) 0:00:12.955 ***********
2025-05-06 00:43:09.967478 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:09.967867 | orchestrator |
2025-05-06 00:43:09.969770 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-06 00:43:09.970255 | orchestrator | Tuesday 06 May 2025 00:43:09 +0000 (0:00:00.134) 0:00:13.089 ***********
2025-05-06 00:43:10.280100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})
2025-05-06 00:43:10.280364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})
2025-05-06 00:43:10.281549 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:10.282450 | orchestrator |
2025-05-06 00:43:10.284871 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-06 00:43:10.290102 | orchestrator | Tuesday 06 May 2025 00:43:10 +0000 (0:00:00.312) 0:00:13.401 ***********
2025-05-06 00:43:10.415363 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:43:10.416810 | orchestrator |
2025-05-06 00:43:10.416855 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-06 00:43:10.417734 | orchestrator | Tuesday 06 May 2025 00:43:10 +0000 (0:00:00.135) 0:00:13.537 ***********
2025-05-06 00:43:10.584340 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})
2025-05-06 00:43:10.584551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})
2025-05-06 00:43:10.585511 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:43:10.587275 | orchestrator |
2025-05-06 00:43:10.588184 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-06 00:43:10.588852 | orchestrator | Tuesday 06 May 2025 00:43:10 +0000 (0:00:00.166) 0:00:13.703 ***********
2025-05-06 00:43:10.751289 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:10.755276 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:10.761042 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:10.761428 | orchestrator | 2025-05-06 00:43:10.762205 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-06 00:43:10.762996 | orchestrator | Tuesday 06 May 2025 00:43:10 +0000 (0:00:00.169) 0:00:13.873 *********** 2025-05-06 00:43:10.908451 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:10.909192 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:10.909736 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:10.910597 | orchestrator | 2025-05-06 00:43:10.911420 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-06 00:43:10.913114 | orchestrator | Tuesday 06 May 2025 00:43:10 +0000 (0:00:00.156) 0:00:14.029 *********** 2025-05-06 00:43:11.035810 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:11.036027 | orchestrator | 2025-05-06 00:43:11.036536 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-06 00:43:11.036956 | orchestrator | Tuesday 06 May 2025 00:43:11 +0000 (0:00:00.127) 0:00:14.157 *********** 2025-05-06 00:43:11.181120 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:11.181366 | orchestrator | 2025-05-06 00:43:11.182062 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2025-05-06 00:43:11.182215 | orchestrator | Tuesday 06 May 2025 00:43:11 +0000 (0:00:00.146) 0:00:14.303 *********** 2025-05-06 00:43:11.318403 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:11.318636 | orchestrator | 2025-05-06 00:43:11.319188 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-06 00:43:11.319770 | orchestrator | Tuesday 06 May 2025 00:43:11 +0000 (0:00:00.136) 0:00:14.440 *********** 2025-05-06 00:43:11.452354 | orchestrator | ok: [testbed-node-3] => { 2025-05-06 00:43:11.452776 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-06 00:43:11.453420 | orchestrator | } 2025-05-06 00:43:11.454412 | orchestrator | 2025-05-06 00:43:11.455435 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-06 00:43:11.456270 | orchestrator | Tuesday 06 May 2025 00:43:11 +0000 (0:00:00.133) 0:00:14.573 *********** 2025-05-06 00:43:11.586288 | orchestrator | ok: [testbed-node-3] => { 2025-05-06 00:43:11.587407 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-06 00:43:11.588226 | orchestrator | } 2025-05-06 00:43:11.589190 | orchestrator | 2025-05-06 00:43:11.589989 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-06 00:43:11.591208 | orchestrator | Tuesday 06 May 2025 00:43:11 +0000 (0:00:00.133) 0:00:14.707 *********** 2025-05-06 00:43:11.725375 | orchestrator | ok: [testbed-node-3] => { 2025-05-06 00:43:11.726151 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-06 00:43:11.726198 | orchestrator | } 2025-05-06 00:43:11.726737 | orchestrator | 2025-05-06 00:43:11.727449 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-06 00:43:11.728105 | orchestrator | Tuesday 06 May 2025 00:43:11 +0000 (0:00:00.138) 0:00:14.846 *********** 2025-05-06 00:43:12.553296 | orchestrator | ok: 
[testbed-node-3] 2025-05-06 00:43:12.554195 | orchestrator | 2025-05-06 00:43:12.556284 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-06 00:43:13.083214 | orchestrator | Tuesday 06 May 2025 00:43:12 +0000 (0:00:00.827) 0:00:15.673 *********** 2025-05-06 00:43:13.083411 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:43:13.083504 | orchestrator | 2025-05-06 00:43:13.083533 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-06 00:43:13.084044 | orchestrator | Tuesday 06 May 2025 00:43:13 +0000 (0:00:00.528) 0:00:16.201 *********** 2025-05-06 00:43:13.580938 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:43:13.581530 | orchestrator | 2025-05-06 00:43:13.582005 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-06 00:43:13.583199 | orchestrator | Tuesday 06 May 2025 00:43:13 +0000 (0:00:00.499) 0:00:16.701 *********** 2025-05-06 00:43:13.718282 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:43:13.718959 | orchestrator | 2025-05-06 00:43:13.719433 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-06 00:43:13.719924 | orchestrator | Tuesday 06 May 2025 00:43:13 +0000 (0:00:00.138) 0:00:16.840 *********** 2025-05-06 00:43:13.838562 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:13.839214 | orchestrator | 2025-05-06 00:43:13.840350 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-06 00:43:13.840865 | orchestrator | Tuesday 06 May 2025 00:43:13 +0000 (0:00:00.119) 0:00:16.959 *********** 2025-05-06 00:43:13.947879 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:13.948456 | orchestrator | 2025-05-06 00:43:13.948838 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-06 00:43:13.949624 | orchestrator | 
Tuesday 06 May 2025 00:43:13 +0000 (0:00:00.110) 0:00:17.069 *********** 2025-05-06 00:43:14.088908 | orchestrator | ok: [testbed-node-3] => { 2025-05-06 00:43:14.089231 | orchestrator |  "vgs_report": { 2025-05-06 00:43:14.089627 | orchestrator |  "vg": [] 2025-05-06 00:43:14.089870 | orchestrator |  } 2025-05-06 00:43:14.090581 | orchestrator | } 2025-05-06 00:43:14.091015 | orchestrator | 2025-05-06 00:43:14.091443 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-06 00:43:14.092029 | orchestrator | Tuesday 06 May 2025 00:43:14 +0000 (0:00:00.140) 0:00:17.210 *********** 2025-05-06 00:43:14.229430 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:14.230928 | orchestrator | 2025-05-06 00:43:14.232273 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-06 00:43:14.233279 | orchestrator | Tuesday 06 May 2025 00:43:14 +0000 (0:00:00.138) 0:00:17.349 *********** 2025-05-06 00:43:14.370458 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:14.371753 | orchestrator | 2025-05-06 00:43:14.372267 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-06 00:43:14.373504 | orchestrator | Tuesday 06 May 2025 00:43:14 +0000 (0:00:00.142) 0:00:17.491 *********** 2025-05-06 00:43:14.513413 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:14.514943 | orchestrator | 2025-05-06 00:43:14.515918 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-06 00:43:14.516431 | orchestrator | Tuesday 06 May 2025 00:43:14 +0000 (0:00:00.139) 0:00:17.631 *********** 2025-05-06 00:43:14.662250 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:14.663547 | orchestrator | 2025-05-06 00:43:14.663600 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-06 00:43:14.665020 | orchestrator | Tuesday 
06 May 2025 00:43:14 +0000 (0:00:00.151) 0:00:17.782 *********** 2025-05-06 00:43:15.005202 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:15.005782 | orchestrator | 2025-05-06 00:43:15.006619 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-06 00:43:15.008488 | orchestrator | Tuesday 06 May 2025 00:43:15 +0000 (0:00:00.343) 0:00:18.126 *********** 2025-05-06 00:43:15.144120 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:15.144537 | orchestrator | 2025-05-06 00:43:15.144958 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-06 00:43:15.146173 | orchestrator | Tuesday 06 May 2025 00:43:15 +0000 (0:00:00.138) 0:00:18.265 *********** 2025-05-06 00:43:15.280129 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:15.280462 | orchestrator | 2025-05-06 00:43:15.281115 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-06 00:43:15.281993 | orchestrator | Tuesday 06 May 2025 00:43:15 +0000 (0:00:00.136) 0:00:18.401 *********** 2025-05-06 00:43:15.425016 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:15.425187 | orchestrator | 2025-05-06 00:43:15.426138 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-06 00:43:15.427874 | orchestrator | Tuesday 06 May 2025 00:43:15 +0000 (0:00:00.142) 0:00:18.543 *********** 2025-05-06 00:43:15.571476 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:15.571942 | orchestrator | 2025-05-06 00:43:15.572127 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-06 00:43:15.572924 | orchestrator | Tuesday 06 May 2025 00:43:15 +0000 (0:00:00.149) 0:00:18.692 *********** 2025-05-06 00:43:15.710148 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:15.710359 | orchestrator | 2025-05-06 00:43:15.711097 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-06 00:43:15.712172 | orchestrator | Tuesday 06 May 2025 00:43:15 +0000 (0:00:00.138) 0:00:18.831 *********** 2025-05-06 00:43:15.851509 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:15.852214 | orchestrator | 2025-05-06 00:43:15.852954 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-06 00:43:15.853807 | orchestrator | Tuesday 06 May 2025 00:43:15 +0000 (0:00:00.141) 0:00:18.973 *********** 2025-05-06 00:43:15.987219 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:15.989215 | orchestrator | 2025-05-06 00:43:16.137620 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-06 00:43:16.137742 | orchestrator | Tuesday 06 May 2025 00:43:15 +0000 (0:00:00.135) 0:00:19.108 *********** 2025-05-06 00:43:16.137780 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:16.138137 | orchestrator | 2025-05-06 00:43:16.138479 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-06 00:43:16.139534 | orchestrator | Tuesday 06 May 2025 00:43:16 +0000 (0:00:00.150) 0:00:19.259 *********** 2025-05-06 00:43:16.279946 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:16.280639 | orchestrator | 2025-05-06 00:43:16.280684 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-06 00:43:16.282116 | orchestrator | Tuesday 06 May 2025 00:43:16 +0000 (0:00:00.142) 0:00:19.401 *********** 2025-05-06 00:43:16.454355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:16.454990 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 
'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:16.455843 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:16.458116 | orchestrator | 2025-05-06 00:43:16.458718 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-06 00:43:16.459341 | orchestrator | Tuesday 06 May 2025 00:43:16 +0000 (0:00:00.173) 0:00:19.575 *********** 2025-05-06 00:43:16.611619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:16.612234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:16.613067 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:16.613902 | orchestrator | 2025-05-06 00:43:16.614563 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-06 00:43:16.615152 | orchestrator | Tuesday 06 May 2025 00:43:16 +0000 (0:00:00.155) 0:00:19.730 *********** 2025-05-06 00:43:16.974440 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:16.977109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:16.979369 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:16.979527 | orchestrator | 2025-05-06 00:43:16.980099 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-06 00:43:16.980390 | orchestrator | Tuesday 06 May 2025 00:43:16 +0000 (0:00:00.364) 0:00:20.095 *********** 2025-05-06 00:43:17.135558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:17.135737 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:17.136096 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:17.136128 | orchestrator | 2025-05-06 00:43:17.136149 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-06 00:43:17.136372 | orchestrator | Tuesday 06 May 2025 00:43:17 +0000 (0:00:00.160) 0:00:20.256 *********** 2025-05-06 00:43:17.298466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:17.298896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:17.298937 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:17.299145 | orchestrator | 2025-05-06 00:43:17.299566 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-06 00:43:17.299929 | orchestrator | Tuesday 06 May 2025 00:43:17 +0000 (0:00:00.163) 0:00:20.419 *********** 2025-05-06 00:43:17.459350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:17.459601 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:17.460821 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:17.462891 | orchestrator | 2025-05-06 00:43:17.463070 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2025-05-06 00:43:17.463118 | orchestrator | Tuesday 06 May 2025 00:43:17 +0000 (0:00:00.160) 0:00:20.580 *********** 2025-05-06 00:43:17.622229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:17.622463 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:17.622900 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:17.623549 | orchestrator | 2025-05-06 00:43:17.624195 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-06 00:43:17.624639 | orchestrator | Tuesday 06 May 2025 00:43:17 +0000 (0:00:00.160) 0:00:20.741 *********** 2025-05-06 00:43:17.779574 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:17.780468 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:17.781638 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:17.783558 | orchestrator | 2025-05-06 00:43:17.783859 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-06 00:43:17.783916 | orchestrator | Tuesday 06 May 2025 00:43:17 +0000 (0:00:00.159) 0:00:20.900 *********** 2025-05-06 00:43:18.310581 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:43:18.313478 | orchestrator | 2025-05-06 00:43:18.313888 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-06 00:43:18.314888 | orchestrator | Tuesday 06 May 2025 00:43:18 +0000 
(0:00:00.531) 0:00:21.432 *********** 2025-05-06 00:43:18.833317 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:43:18.833509 | orchestrator | 2025-05-06 00:43:18.833921 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-06 00:43:18.834445 | orchestrator | Tuesday 06 May 2025 00:43:18 +0000 (0:00:00.521) 0:00:21.954 *********** 2025-05-06 00:43:18.976489 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:43:18.976825 | orchestrator | 2025-05-06 00:43:18.977801 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-06 00:43:18.978382 | orchestrator | Tuesday 06 May 2025 00:43:18 +0000 (0:00:00.144) 0:00:22.098 *********** 2025-05-06 00:43:19.170636 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'vg_name': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'}) 2025-05-06 00:43:19.170856 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'vg_name': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'}) 2025-05-06 00:43:19.171009 | orchestrator | 2025-05-06 00:43:19.171627 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-06 00:43:19.172292 | orchestrator | Tuesday 06 May 2025 00:43:19 +0000 (0:00:00.193) 0:00:22.292 *********** 2025-05-06 00:43:19.335464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:19.337029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:19.339528 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:19.339579 | orchestrator | 2025-05-06 00:43:19.342107 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2025-05-06 00:43:19.342274 | orchestrator | Tuesday 06 May 2025 00:43:19 +0000 (0:00:00.164) 0:00:22.456 *********** 2025-05-06 00:43:19.666360 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:19.666875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:19.667549 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:19.668199 | orchestrator | 2025-05-06 00:43:19.670635 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-06 00:43:19.853893 | orchestrator | Tuesday 06 May 2025 00:43:19 +0000 (0:00:00.329) 0:00:22.786 *********** 2025-05-06 00:43:19.854152 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'})  2025-05-06 00:43:19.854594 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'})  2025-05-06 00:43:19.855076 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:43:19.855654 | orchestrator | 2025-05-06 00:43:19.856410 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-06 00:43:19.857035 | orchestrator | Tuesday 06 May 2025 00:43:19 +0000 (0:00:00.188) 0:00:22.975 *********** 2025-05-06 00:43:20.530320 | orchestrator | ok: [testbed-node-3] => { 2025-05-06 00:43:20.530692 | orchestrator |  "lvm_report": { 2025-05-06 00:43:20.531654 | orchestrator |  "lv": [ 2025-05-06 00:43:20.532329 | orchestrator |  { 2025-05-06 00:43:20.534551 | orchestrator |  "lv_name": 
"osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34", 2025-05-06 00:43:20.535383 | orchestrator |  "vg_name": "ceph-2fbee355-69b3-5569-a73a-eae1d5356d34" 2025-05-06 00:43:20.536005 | orchestrator |  }, 2025-05-06 00:43:20.536430 | orchestrator |  { 2025-05-06 00:43:20.536873 | orchestrator |  "lv_name": "osd-block-83550523-1175-5b11-b232-63a45b36e32a", 2025-05-06 00:43:20.537099 | orchestrator |  "vg_name": "ceph-83550523-1175-5b11-b232-63a45b36e32a" 2025-05-06 00:43:20.537399 | orchestrator |  } 2025-05-06 00:43:20.538787 | orchestrator |  ], 2025-05-06 00:43:20.539569 | orchestrator |  "pv": [ 2025-05-06 00:43:20.539933 | orchestrator |  { 2025-05-06 00:43:20.540277 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-06 00:43:20.540677 | orchestrator |  "vg_name": "ceph-83550523-1175-5b11-b232-63a45b36e32a" 2025-05-06 00:43:20.541085 | orchestrator |  }, 2025-05-06 00:43:20.541272 | orchestrator |  { 2025-05-06 00:43:20.543257 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-06 00:43:20.544307 | orchestrator |  "vg_name": "ceph-2fbee355-69b3-5569-a73a-eae1d5356d34" 2025-05-06 00:43:20.544532 | orchestrator |  } 2025-05-06 00:43:20.544556 | orchestrator |  ] 2025-05-06 00:43:20.544575 | orchestrator |  } 2025-05-06 00:43:20.545078 | orchestrator | } 2025-05-06 00:43:20.545285 | orchestrator | 2025-05-06 00:43:20.545585 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-06 00:43:20.546170 | orchestrator | 2025-05-06 00:43:20.546415 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-06 00:43:20.546619 | orchestrator | Tuesday 06 May 2025 00:43:20 +0000 (0:00:00.675) 0:00:23.651 *********** 2025-05-06 00:43:21.085808 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-06 00:43:21.086501 | orchestrator | 2025-05-06 00:43:21.086567 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-06 
00:43:21.090193 | orchestrator | Tuesday 06 May 2025 00:43:21 +0000 (0:00:00.554) 0:00:24.206 *********** 2025-05-06 00:43:21.309524 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:43:21.310216 | orchestrator | 2025-05-06 00:43:21.310720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:21.311589 | orchestrator | Tuesday 06 May 2025 00:43:21 +0000 (0:00:00.224) 0:00:24.430 *********** 2025-05-06 00:43:21.772541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-06 00:43:21.773479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-06 00:43:21.776512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-06 00:43:21.776619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-06 00:43:21.776668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-06 00:43:21.777571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-06 00:43:21.778482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-06 00:43:21.778931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-06 00:43:21.779575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-06 00:43:21.780119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-06 00:43:21.780869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-06 00:43:21.781666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-06 00:43:21.781793 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-06 00:43:21.782860 | orchestrator | 2025-05-06 00:43:21.783653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:21.784203 | orchestrator | Tuesday 06 May 2025 00:43:21 +0000 (0:00:00.461) 0:00:24.892 *********** 2025-05-06 00:43:21.959233 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:21.960253 | orchestrator | 2025-05-06 00:43:21.961359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:21.962877 | orchestrator | Tuesday 06 May 2025 00:43:21 +0000 (0:00:00.187) 0:00:25.080 *********** 2025-05-06 00:43:22.158913 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:22.160143 | orchestrator | 2025-05-06 00:43:22.160311 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:22.160561 | orchestrator | Tuesday 06 May 2025 00:43:22 +0000 (0:00:00.198) 0:00:25.278 *********** 2025-05-06 00:43:22.350504 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:22.351200 | orchestrator | 2025-05-06 00:43:22.351563 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:22.547049 | orchestrator | Tuesday 06 May 2025 00:43:22 +0000 (0:00:00.191) 0:00:25.470 *********** 2025-05-06 00:43:22.547183 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:22.547630 | orchestrator | 2025-05-06 00:43:22.548856 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:22.551423 | orchestrator | Tuesday 06 May 2025 00:43:22 +0000 (0:00:00.197) 0:00:25.668 *********** 2025-05-06 00:43:22.741796 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:22.742075 | orchestrator | 2025-05-06 00:43:22.742115 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2025-05-06 00:43:22.742662 | orchestrator | Tuesday 06 May 2025 00:43:22 +0000 (0:00:00.193) 0:00:25.861 *********** 2025-05-06 00:43:22.931645 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:22.932189 | orchestrator | 2025-05-06 00:43:22.932880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:22.933738 | orchestrator | Tuesday 06 May 2025 00:43:22 +0000 (0:00:00.191) 0:00:26.053 *********** 2025-05-06 00:43:23.149717 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:23.150234 | orchestrator | 2025-05-06 00:43:23.150733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:23.151466 | orchestrator | Tuesday 06 May 2025 00:43:23 +0000 (0:00:00.217) 0:00:26.270 *********** 2025-05-06 00:43:23.741111 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:23.741290 | orchestrator | 2025-05-06 00:43:23.742280 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:23.743302 | orchestrator | Tuesday 06 May 2025 00:43:23 +0000 (0:00:00.591) 0:00:26.861 *********** 2025-05-06 00:43:24.165818 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8) 2025-05-06 00:43:24.166089 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8) 2025-05-06 00:43:24.167173 | orchestrator | 2025-05-06 00:43:24.168019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:24.169742 | orchestrator | Tuesday 06 May 2025 00:43:24 +0000 (0:00:00.424) 0:00:27.286 *********** 2025-05-06 00:43:24.610359 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c3e2c64f-9688-4cad-bb81-b3a7d150bd8b) 2025-05-06 00:43:24.611379 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c3e2c64f-9688-4cad-bb81-b3a7d150bd8b) 2025-05-06 00:43:24.612709 | orchestrator | 2025-05-06 00:43:24.613623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:24.614071 | orchestrator | Tuesday 06 May 2025 00:43:24 +0000 (0:00:00.445) 0:00:27.731 *********** 2025-05-06 00:43:25.021310 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc0c56a8-1377-4a36-857b-86c78b746055) 2025-05-06 00:43:25.021711 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc0c56a8-1377-4a36-857b-86c78b746055) 2025-05-06 00:43:25.022136 | orchestrator | 2025-05-06 00:43:25.022579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:25.023166 | orchestrator | Tuesday 06 May 2025 00:43:25 +0000 (0:00:00.409) 0:00:28.141 *********** 2025-05-06 00:43:25.448490 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eefa0fb1-6e32-4be6-9371-3c36667f9eb4) 2025-05-06 00:43:25.449288 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eefa0fb1-6e32-4be6-9371-3c36667f9eb4) 2025-05-06 00:43:25.449897 | orchestrator | 2025-05-06 00:43:25.450710 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-06 00:43:25.451448 | orchestrator | Tuesday 06 May 2025 00:43:25 +0000 (0:00:00.426) 0:00:28.568 *********** 2025-05-06 00:43:25.767562 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-06 00:43:25.768173 | orchestrator | 2025-05-06 00:43:25.769019 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:25.770005 | orchestrator | Tuesday 06 May 2025 00:43:25 +0000 (0:00:00.319) 0:00:28.887 *********** 2025-05-06 00:43:26.255891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2025-05-06 00:43:26.256084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-06 00:43:26.256108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-06 00:43:26.256129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-06 00:43:26.258280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-06 00:43:26.258558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-06 00:43:26.258884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-06 00:43:26.260424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-06 00:43:26.262979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-06 00:43:26.264160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-06 00:43:26.264304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-06 00:43:26.265621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-06 00:43:26.266178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-06 00:43:26.266371 | orchestrator | 2025-05-06 00:43:26.267846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:26.267880 | orchestrator | Tuesday 06 May 2025 00:43:26 +0000 (0:00:00.485) 0:00:29.373 *********** 2025-05-06 00:43:26.472288 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:26.472552 | orchestrator | 2025-05-06 
00:43:26.473300 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:26.473790 | orchestrator | Tuesday 06 May 2025 00:43:26 +0000 (0:00:00.220) 0:00:29.593 *********** 2025-05-06 00:43:26.699618 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:26.699835 | orchestrator | 2025-05-06 00:43:26.700438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:26.701066 | orchestrator | Tuesday 06 May 2025 00:43:26 +0000 (0:00:00.227) 0:00:29.820 *********** 2025-05-06 00:43:27.268815 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:27.268994 | orchestrator | 2025-05-06 00:43:27.269019 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:27.271521 | orchestrator | Tuesday 06 May 2025 00:43:27 +0000 (0:00:00.566) 0:00:30.387 *********** 2025-05-06 00:43:27.462820 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:27.463394 | orchestrator | 2025-05-06 00:43:27.463747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:27.663466 | orchestrator | Tuesday 06 May 2025 00:43:27 +0000 (0:00:00.195) 0:00:30.583 *********** 2025-05-06 00:43:27.663638 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:27.663717 | orchestrator | 2025-05-06 00:43:27.664538 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:27.665914 | orchestrator | Tuesday 06 May 2025 00:43:27 +0000 (0:00:00.200) 0:00:30.784 *********** 2025-05-06 00:43:27.860211 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:27.860373 | orchestrator | 2025-05-06 00:43:27.861028 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:27.861399 | orchestrator | Tuesday 06 May 2025 00:43:27 +0000 (0:00:00.196) 
0:00:30.981 *********** 2025-05-06 00:43:28.069252 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:28.070213 | orchestrator | 2025-05-06 00:43:28.070919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:28.072003 | orchestrator | Tuesday 06 May 2025 00:43:28 +0000 (0:00:00.208) 0:00:31.190 *********** 2025-05-06 00:43:28.277538 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:28.278176 | orchestrator | 2025-05-06 00:43:28.279057 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:28.279983 | orchestrator | Tuesday 06 May 2025 00:43:28 +0000 (0:00:00.208) 0:00:31.398 *********** 2025-05-06 00:43:28.955923 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-06 00:43:28.956249 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-06 00:43:28.956280 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-06 00:43:28.956302 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-06 00:43:28.958221 | orchestrator | 2025-05-06 00:43:28.958926 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:28.959578 | orchestrator | Tuesday 06 May 2025 00:43:28 +0000 (0:00:00.674) 0:00:32.073 *********** 2025-05-06 00:43:29.148423 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:29.148871 | orchestrator | 2025-05-06 00:43:29.149588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:29.150438 | orchestrator | Tuesday 06 May 2025 00:43:29 +0000 (0:00:00.196) 0:00:32.269 *********** 2025-05-06 00:43:29.338278 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:29.338460 | orchestrator | 2025-05-06 00:43:29.338742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:29.339452 | orchestrator | Tuesday 06 
May 2025 00:43:29 +0000 (0:00:00.190) 0:00:32.459 *********** 2025-05-06 00:43:29.532512 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:29.534217 | orchestrator | 2025-05-06 00:43:29.534839 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-06 00:43:29.741853 | orchestrator | Tuesday 06 May 2025 00:43:29 +0000 (0:00:00.191) 0:00:32.651 *********** 2025-05-06 00:43:29.742091 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:29.742183 | orchestrator | 2025-05-06 00:43:29.742230 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-06 00:43:29.742493 | orchestrator | Tuesday 06 May 2025 00:43:29 +0000 (0:00:00.211) 0:00:32.862 *********** 2025-05-06 00:43:30.086460 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:30.086867 | orchestrator | 2025-05-06 00:43:30.088172 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-06 00:43:30.091165 | orchestrator | Tuesday 06 May 2025 00:43:30 +0000 (0:00:00.342) 0:00:33.205 *********** 2025-05-06 00:43:30.282606 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8a0f4265-dd5d-556c-ac35-a800ef93314e'}}) 2025-05-06 00:43:30.284221 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '108592b4-5156-5470-952e-be389a9738cf'}}) 2025-05-06 00:43:30.284993 | orchestrator | 2025-05-06 00:43:30.287110 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-06 00:43:32.170288 | orchestrator | Tuesday 06 May 2025 00:43:30 +0000 (0:00:00.198) 0:00:33.403 *********** 2025-05-06 00:43:32.170432 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'}) 2025-05-06 00:43:32.170622 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'}) 2025-05-06 00:43:32.171166 | orchestrator | 2025-05-06 00:43:32.171828 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-06 00:43:32.173749 | orchestrator | Tuesday 06 May 2025 00:43:32 +0000 (0:00:01.885) 0:00:35.289 *********** 2025-05-06 00:43:32.353302 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})  2025-05-06 00:43:32.353534 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})  2025-05-06 00:43:32.353586 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:32.353779 | orchestrator | 2025-05-06 00:43:32.354466 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-06 00:43:32.354679 | orchestrator | Tuesday 06 May 2025 00:43:32 +0000 (0:00:00.184) 0:00:35.474 *********** 2025-05-06 00:43:33.746191 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'}) 2025-05-06 00:43:33.747016 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'}) 2025-05-06 00:43:33.747879 | orchestrator | 2025-05-06 00:43:33.748718 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-06 00:43:33.748756 | orchestrator | Tuesday 06 May 2025 00:43:33 +0000 (0:00:01.391) 0:00:36.865 *********** 2025-05-06 00:43:33.919401 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 
'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})  2025-05-06 00:43:33.921015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})  2025-05-06 00:43:33.921864 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:33.922992 | orchestrator | 2025-05-06 00:43:33.923352 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-06 00:43:33.924491 | orchestrator | Tuesday 06 May 2025 00:43:33 +0000 (0:00:00.173) 0:00:37.039 *********** 2025-05-06 00:43:34.074911 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:34.076055 | orchestrator | 2025-05-06 00:43:34.076465 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-06 00:43:34.077625 | orchestrator | Tuesday 06 May 2025 00:43:34 +0000 (0:00:00.157) 0:00:37.196 *********** 2025-05-06 00:43:34.236503 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})  2025-05-06 00:43:34.237016 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})  2025-05-06 00:43:34.238128 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:34.239594 | orchestrator | 2025-05-06 00:43:34.240717 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-06 00:43:34.240928 | orchestrator | Tuesday 06 May 2025 00:43:34 +0000 (0:00:00.160) 0:00:37.357 *********** 2025-05-06 00:43:34.380710 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:34.382007 | orchestrator | 2025-05-06 00:43:34.382760 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-06 00:43:34.384753 | orchestrator | 
Tuesday 06 May 2025 00:43:34 +0000 (0:00:00.144) 0:00:37.501 *********** 2025-05-06 00:43:34.705870 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})  2025-05-06 00:43:34.706377 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})  2025-05-06 00:43:34.708488 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:34.846473 | orchestrator | 2025-05-06 00:43:34.846604 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-06 00:43:34.846627 | orchestrator | Tuesday 06 May 2025 00:43:34 +0000 (0:00:00.323) 0:00:37.824 *********** 2025-05-06 00:43:34.846664 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:34.847588 | orchestrator | 2025-05-06 00:43:34.848368 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-06 00:43:34.849763 | orchestrator | Tuesday 06 May 2025 00:43:34 +0000 (0:00:00.142) 0:00:37.967 *********** 2025-05-06 00:43:35.018491 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})  2025-05-06 00:43:35.020440 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})  2025-05-06 00:43:35.022344 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:35.023483 | orchestrator | 2025-05-06 00:43:35.024220 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-06 00:43:35.024987 | orchestrator | Tuesday 06 May 2025 00:43:35 +0000 (0:00:00.171) 0:00:38.139 *********** 2025-05-06 00:43:35.163265 | orchestrator | ok: [testbed-node-4] 
2025-05-06 00:43:35.164295 | orchestrator | 2025-05-06 00:43:35.164886 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-06 00:43:35.165605 | orchestrator | Tuesday 06 May 2025 00:43:35 +0000 (0:00:00.144) 0:00:38.284 *********** 2025-05-06 00:43:35.327025 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})  2025-05-06 00:43:35.327229 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})  2025-05-06 00:43:35.328059 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:35.328170 | orchestrator | 2025-05-06 00:43:35.328686 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-06 00:43:35.329059 | orchestrator | Tuesday 06 May 2025 00:43:35 +0000 (0:00:00.163) 0:00:38.447 *********** 2025-05-06 00:43:35.489307 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})  2025-05-06 00:43:35.489890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})  2025-05-06 00:43:35.491025 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:35.492502 | orchestrator | 2025-05-06 00:43:35.494003 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-06 00:43:35.662801 | orchestrator | Tuesday 06 May 2025 00:43:35 +0000 (0:00:00.163) 0:00:38.610 *********** 2025-05-06 00:43:35.663022 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})  2025-05-06 
00:43:35.665415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})  2025-05-06 00:43:35.665446 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:35.665471 | orchestrator | 2025-05-06 00:43:35.801302 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-06 00:43:35.801404 | orchestrator | Tuesday 06 May 2025 00:43:35 +0000 (0:00:00.170) 0:00:38.781 *********** 2025-05-06 00:43:35.801435 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:35.801577 | orchestrator | 2025-05-06 00:43:35.802406 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-06 00:43:35.803177 | orchestrator | Tuesday 06 May 2025 00:43:35 +0000 (0:00:00.141) 0:00:38.922 *********** 2025-05-06 00:43:35.945508 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:35.946325 | orchestrator | 2025-05-06 00:43:35.946371 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-06 00:43:35.947064 | orchestrator | Tuesday 06 May 2025 00:43:35 +0000 (0:00:00.143) 0:00:39.066 *********** 2025-05-06 00:43:36.101438 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:36.104387 | orchestrator | 2025-05-06 00:43:36.105175 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-06 00:43:36.105502 | orchestrator | Tuesday 06 May 2025 00:43:36 +0000 (0:00:00.156) 0:00:39.222 *********** 2025-05-06 00:43:36.246369 | orchestrator | ok: [testbed-node-4] => { 2025-05-06 00:43:36.246911 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-06 00:43:36.247994 | orchestrator | } 2025-05-06 00:43:36.249419 | orchestrator | 2025-05-06 00:43:36.250000 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-06 
00:43:36.250918 | orchestrator | Tuesday 06 May 2025 00:43:36 +0000 (0:00:00.145) 0:00:39.368 *********** 2025-05-06 00:43:36.618550 | orchestrator | ok: [testbed-node-4] => { 2025-05-06 00:43:36.618772 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-06 00:43:36.622277 | orchestrator | } 2025-05-06 00:43:36.623386 | orchestrator | 2025-05-06 00:43:36.623421 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-06 00:43:36.624575 | orchestrator | Tuesday 06 May 2025 00:43:36 +0000 (0:00:00.371) 0:00:39.739 *********** 2025-05-06 00:43:36.778142 | orchestrator | ok: [testbed-node-4] => { 2025-05-06 00:43:36.778744 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-06 00:43:36.779791 | orchestrator | } 2025-05-06 00:43:36.779842 | orchestrator | 2025-05-06 00:43:36.780406 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-06 00:43:36.780626 | orchestrator | Tuesday 06 May 2025 00:43:36 +0000 (0:00:00.160) 0:00:39.899 *********** 2025-05-06 00:43:37.279665 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:43:37.281033 | orchestrator | 2025-05-06 00:43:37.281658 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-06 00:43:37.282646 | orchestrator | Tuesday 06 May 2025 00:43:37 +0000 (0:00:00.501) 0:00:40.401 *********** 2025-05-06 00:43:37.844985 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:43:37.845169 | orchestrator | 2025-05-06 00:43:37.847059 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-06 00:43:37.848042 | orchestrator | Tuesday 06 May 2025 00:43:37 +0000 (0:00:00.564) 0:00:40.966 *********** 2025-05-06 00:43:38.397026 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:43:38.397201 | orchestrator | 2025-05-06 00:43:38.397901 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2025-05-06 00:43:38.400711 | orchestrator | Tuesday 06 May 2025 00:43:38 +0000 (0:00:00.549) 0:00:41.515 *********** 2025-05-06 00:43:38.551483 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:43:38.553551 | orchestrator | 2025-05-06 00:43:38.553596 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-06 00:43:38.680990 | orchestrator | Tuesday 06 May 2025 00:43:38 +0000 (0:00:00.155) 0:00:41.671 *********** 2025-05-06 00:43:38.681167 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:38.681245 | orchestrator | 2025-05-06 00:43:38.681267 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-06 00:43:38.681639 | orchestrator | Tuesday 06 May 2025 00:43:38 +0000 (0:00:00.130) 0:00:41.802 *********** 2025-05-06 00:43:38.814454 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:38.954723 | orchestrator | 2025-05-06 00:43:38.954821 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-06 00:43:38.954839 | orchestrator | Tuesday 06 May 2025 00:43:38 +0000 (0:00:00.132) 0:00:41.934 *********** 2025-05-06 00:43:38.954868 | orchestrator | ok: [testbed-node-4] => { 2025-05-06 00:43:38.955105 | orchestrator |  "vgs_report": { 2025-05-06 00:43:38.955494 | orchestrator |  "vg": [] 2025-05-06 00:43:38.956044 | orchestrator |  } 2025-05-06 00:43:38.956412 | orchestrator | } 2025-05-06 00:43:38.958090 | orchestrator | 2025-05-06 00:43:39.098488 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-06 00:43:39.098623 | orchestrator | Tuesday 06 May 2025 00:43:38 +0000 (0:00:00.141) 0:00:42.076 *********** 2025-05-06 00:43:39.098659 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:39.099544 | orchestrator | 2025-05-06 00:43:39.100279 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2025-05-06 00:43:39.101081 | orchestrator | Tuesday 06 May 2025 00:43:39 +0000 (0:00:00.143) 0:00:42.219 *********** 2025-05-06 00:43:39.235735 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:39.236493 | orchestrator | 2025-05-06 00:43:39.237706 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-06 00:43:39.238465 | orchestrator | Tuesday 06 May 2025 00:43:39 +0000 (0:00:00.137) 0:00:42.357 *********** 2025-05-06 00:43:39.538328 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:39.540070 | orchestrator | 2025-05-06 00:43:39.540509 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-06 00:43:39.541365 | orchestrator | Tuesday 06 May 2025 00:43:39 +0000 (0:00:00.298) 0:00:42.655 *********** 2025-05-06 00:43:39.689681 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:39.691389 | orchestrator | 2025-05-06 00:43:39.692101 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-06 00:43:39.693254 | orchestrator | Tuesday 06 May 2025 00:43:39 +0000 (0:00:00.155) 0:00:42.811 *********** 2025-05-06 00:43:39.844180 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:39.845128 | orchestrator | 2025-05-06 00:43:39.846490 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-06 00:43:39.846927 | orchestrator | Tuesday 06 May 2025 00:43:39 +0000 (0:00:00.152) 0:00:42.963 *********** 2025-05-06 00:43:40.004867 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:40.005170 | orchestrator | 2025-05-06 00:43:40.005849 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-06 00:43:40.008303 | orchestrator | Tuesday 06 May 2025 00:43:39 +0000 (0:00:00.161) 0:00:43.125 *********** 2025-05-06 00:43:40.151349 | orchestrator | skipping: [testbed-node-4] 
2025-05-06 00:43:40.151657 | orchestrator | 2025-05-06 00:43:40.153158 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-06 00:43:40.153581 | orchestrator | Tuesday 06 May 2025 00:43:40 +0000 (0:00:00.146) 0:00:43.272 *********** 2025-05-06 00:43:40.296804 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:40.297113 | orchestrator | 2025-05-06 00:43:40.297595 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-06 00:43:40.298406 | orchestrator | Tuesday 06 May 2025 00:43:40 +0000 (0:00:00.145) 0:00:43.417 *********** 2025-05-06 00:43:40.453909 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:40.454426 | orchestrator | 2025-05-06 00:43:40.456440 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-06 00:43:40.457057 | orchestrator | Tuesday 06 May 2025 00:43:40 +0000 (0:00:00.155) 0:00:43.573 *********** 2025-05-06 00:43:40.594414 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:40.595048 | orchestrator | 2025-05-06 00:43:40.595094 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-06 00:43:40.595889 | orchestrator | Tuesday 06 May 2025 00:43:40 +0000 (0:00:00.141) 0:00:43.714 *********** 2025-05-06 00:43:40.727339 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:40.727794 | orchestrator | 2025-05-06 00:43:40.729114 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-06 00:43:40.729316 | orchestrator | Tuesday 06 May 2025 00:43:40 +0000 (0:00:00.134) 0:00:43.849 *********** 2025-05-06 00:43:40.878646 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:40.879114 | orchestrator | 2025-05-06 00:43:40.879164 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-06 00:43:40.879607 | orchestrator | 
Tuesday 06 May 2025 00:43:40 +0000 (0:00:00.150) 0:00:43.999 *********** 2025-05-06 00:43:41.023803 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:41.024128 | orchestrator | 2025-05-06 00:43:41.024554 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-06 00:43:41.025480 | orchestrator | Tuesday 06 May 2025 00:43:41 +0000 (0:00:00.145) 0:00:44.145 *********** 2025-05-06 00:43:41.169811 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:41.171703 | orchestrator | 2025-05-06 00:43:41.172440 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-06 00:43:41.173226 | orchestrator | Tuesday 06 May 2025 00:43:41 +0000 (0:00:00.143) 0:00:44.289 *********** 2025-05-06 00:43:41.544923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})  2025-05-06 00:43:41.545169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})  2025-05-06 00:43:41.546117 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:43:41.549414 | orchestrator | 2025-05-06 00:43:41.721234 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-06 00:43:41.721365 | orchestrator | Tuesday 06 May 2025 00:43:41 +0000 (0:00:00.375) 0:00:44.664 *********** 2025-05-06 00:43:41.721401 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})  2025-05-06 00:43:41.721574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})  2025-05-06 00:43:41.721604 | orchestrator | skipping: 
[testbed-node-4]
2025-05-06 00:43:41.722355 | orchestrator |
2025-05-06 00:43:41.722679 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-06 00:43:41.723076 | orchestrator | Tuesday 06 May 2025 00:43:41 +0000 (0:00:00.177) 0:00:44.842 ***********
2025-05-06 00:43:41.892516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})
2025-05-06 00:43:41.893222 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})
2025-05-06 00:43:41.894455 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:43:41.895710 | orchestrator |
2025-05-06 00:43:41.896627 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-06 00:43:41.897707 | orchestrator | Tuesday 06 May 2025 00:43:41 +0000 (0:00:00.171) 0:00:45.013 ***********
2025-05-06 00:43:42.085604 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})
2025-05-06 00:43:42.085843 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})
2025-05-06 00:43:42.086713 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:43:42.088694 | orchestrator |
2025-05-06 00:43:42.089145 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-06 00:43:42.089966 | orchestrator | Tuesday 06 May 2025 00:43:42 +0000 (0:00:00.192) 0:00:45.206 ***********
2025-05-06 00:43:42.249209 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})
2025-05-06 00:43:42.249415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})
2025-05-06 00:43:42.250858 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:43:42.251436 | orchestrator |
2025-05-06 00:43:42.252411 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-06 00:43:42.253574 | orchestrator | Tuesday 06 May 2025 00:43:42 +0000 (0:00:00.164) 0:00:45.370 ***********
2025-05-06 00:43:42.422366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})
2025-05-06 00:43:42.422544 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})
2025-05-06 00:43:42.422574 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:43:42.422968 | orchestrator |
2025-05-06 00:43:42.423279 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-06 00:43:42.423309 | orchestrator | Tuesday 06 May 2025 00:43:42 +0000 (0:00:00.172) 0:00:45.543 ***********
2025-05-06 00:43:42.607861 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})
2025-05-06 00:43:42.608756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})
2025-05-06 00:43:42.609599 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:43:42.610250 | orchestrator |
2025-05-06 00:43:42.612424 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-06 00:43:42.784806 | orchestrator | Tuesday 06 May 2025 00:43:42 +0000 (0:00:00.185) 0:00:45.728 ***********
2025-05-06 00:43:42.784973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})
2025-05-06 00:43:42.786767 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})
2025-05-06 00:43:42.786804 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:43:42.787609 | orchestrator |
2025-05-06 00:43:42.788544 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-06 00:43:42.789558 | orchestrator | Tuesday 06 May 2025 00:43:42 +0000 (0:00:00.176) 0:00:45.905 ***********
2025-05-06 00:43:43.316383 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:43:43.317402 | orchestrator |
2025-05-06 00:43:43.317601 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-06 00:43:43.318519 | orchestrator | Tuesday 06 May 2025 00:43:43 +0000 (0:00:00.531) 0:00:46.437 ***********
2025-05-06 00:43:43.857132 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:43:43.857536 | orchestrator |
2025-05-06 00:43:43.857580 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-06 00:43:43.857815 | orchestrator | Tuesday 06 May 2025 00:43:43 +0000 (0:00:00.540) 0:00:46.977 ***********
2025-05-06 00:43:44.192432 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:43:44.192796 | orchestrator |
2025-05-06 00:43:44.193953 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-06 00:43:44.194709 | orchestrator | Tuesday 06 May 2025 00:43:44 +0000 (0:00:00.335) 0:00:47.313 ***********
2025-05-06 00:43:44.376557 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'vg_name': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})
2025-05-06 00:43:44.377567 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'vg_name': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})
2025-05-06 00:43:44.377585 | orchestrator |
2025-05-06 00:43:44.380959 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-06 00:43:44.381447 | orchestrator | Tuesday 06 May 2025 00:43:44 +0000 (0:00:00.182) 0:00:47.496 ***********
2025-05-06 00:43:44.551766 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})
2025-05-06 00:43:44.552041 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})
2025-05-06 00:43:44.553637 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:43:44.555291 | orchestrator |
2025-05-06 00:43:44.557208 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-06 00:43:44.750437 | orchestrator | Tuesday 06 May 2025 00:43:44 +0000 (0:00:00.174) 0:00:47.671 ***********
2025-05-06 00:43:44.750566 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})
2025-05-06 00:43:44.751272 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})
2025-05-06 00:43:44.751303 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:43:44.752286 | orchestrator |
2025-05-06 00:43:44.753228 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-06 00:43:44.753783 | orchestrator | Tuesday 06 May 2025 00:43:44 +0000 (0:00:00.199) 0:00:47.871 ***********
2025-05-06 00:43:44.936473 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'})
2025-05-06 00:43:44.936764 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'})
2025-05-06 00:43:44.937762 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:43:44.938635 | orchestrator |
2025-05-06 00:43:44.939669 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-06 00:43:44.940156 | orchestrator | Tuesday 06 May 2025 00:43:44 +0000 (0:00:00.185) 0:00:48.056 ***********
2025-05-06 00:43:45.851128 | orchestrator | ok: [testbed-node-4] => {
2025-05-06 00:43:45.851829 | orchestrator |  "lvm_report": {
2025-05-06 00:43:45.852648 | orchestrator |  "lv": [
2025-05-06 00:43:45.853596 | orchestrator |  {
2025-05-06 00:43:45.854361 | orchestrator |  "lv_name": "osd-block-108592b4-5156-5470-952e-be389a9738cf",
2025-05-06 00:43:45.856026 | orchestrator |  "vg_name": "ceph-108592b4-5156-5470-952e-be389a9738cf"
2025-05-06 00:43:45.858124 | orchestrator |  },
2025-05-06 00:43:45.858554 | orchestrator |  {
2025-05-06 00:43:45.859180 | orchestrator |  "lv_name": "osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e",
2025-05-06 00:43:45.860267 | orchestrator |  "vg_name": "ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e"
2025-05-06 00:43:45.861058 | orchestrator |  }
2025-05-06 00:43:45.861463 | orchestrator |  ],
2025-05-06 00:43:45.862331 | orchestrator |  "pv": [
2025-05-06 00:43:45.862734 | orchestrator |  {
2025-05-06 00:43:45.863798 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-06 00:43:45.864527 | orchestrator |  "vg_name": "ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e"
2025-05-06 00:43:45.864589 | orchestrator |  },
2025-05-06 00:43:45.864656 | orchestrator |  {
2025-05-06 00:43:45.865334 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-06 00:43:45.865725 | orchestrator |  "vg_name": "ceph-108592b4-5156-5470-952e-be389a9738cf"
2025-05-06 00:43:45.866946 | orchestrator |  }
2025-05-06 00:43:45.867087 | orchestrator |  ]
2025-05-06 00:43:45.867425 | orchestrator |  }
2025-05-06 00:43:45.867766 | orchestrator | }
2025-05-06 00:43:45.868130 | orchestrator |
2025-05-06 00:43:45.868767 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-06 00:43:45.869144 | orchestrator |
2025-05-06 00:43:45.869531 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-06 00:43:45.869789 | orchestrator | Tuesday 06 May 2025 00:43:45 +0000 (0:00:00.913) 0:00:48.970 ***********
2025-05-06 00:43:46.125499 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-06 00:43:46.125680 | orchestrator |
2025-05-06 00:43:46.126382 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-06 00:43:46.127794 | orchestrator | Tuesday 06 May 2025 00:43:46 +0000 (0:00:00.275) 0:00:49.246 ***********
2025-05-06 00:43:46.376529 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:43:46.839724 | orchestrator |
2025-05-06 00:43:46.839843 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:46.839862 | orchestrator | Tuesday 06 May 2025 00:43:46 +0000 (0:00:00.245) 0:00:49.491 ***********
2025-05-06 00:43:46.839894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-06 00:43:46.840591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-06 00:43:46.840629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-06 00:43:46.841244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-06 00:43:46.841961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-06 00:43:46.842660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-06 00:43:46.842887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-06 00:43:46.843483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-06 00:43:46.843848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-06 00:43:46.844311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-06 00:43:46.844775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-06 00:43:46.845126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-06 00:43:46.846313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-06 00:43:46.846468 | orchestrator |
2025-05-06 00:43:46.846495 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:46.847006 | orchestrator | Tuesday 06 May 2025 00:43:46 +0000 (0:00:00.463) 0:00:49.955 ***********
2025-05-06 00:43:47.038880 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:47.039140 | orchestrator |
2025-05-06 00:43:47.039962 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:47.041078 | orchestrator | Tuesday 06 May 2025 00:43:47 +0000 (0:00:00.204) 0:00:50.159 ***********
2025-05-06 00:43:47.248184 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:47.248365 | orchestrator |
2025-05-06 00:43:47.248664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:47.249475 | orchestrator | Tuesday 06 May 2025 00:43:47 +0000 (0:00:00.210) 0:00:50.370 ***********
2025-05-06 00:43:47.456691 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:47.457757 | orchestrator |
2025-05-06 00:43:47.457802 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:47.458229 | orchestrator | Tuesday 06 May 2025 00:43:47 +0000 (0:00:00.208) 0:00:50.578 ***********
2025-05-06 00:43:47.671199 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:47.673165 | orchestrator |
2025-05-06 00:43:47.673796 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:47.675155 | orchestrator | Tuesday 06 May 2025 00:43:47 +0000 (0:00:00.212) 0:00:50.790 ***********
2025-05-06 00:43:47.897628 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:47.897791 | orchestrator |
2025-05-06 00:43:47.898487 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:47.899187 | orchestrator | Tuesday 06 May 2025 00:43:47 +0000 (0:00:00.227) 0:00:51.018 ***********
2025-05-06 00:43:48.488040 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:48.488558 | orchestrator |
2025-05-06 00:43:48.489588 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:48.490386 | orchestrator | Tuesday 06 May 2025 00:43:48 +0000 (0:00:00.590) 0:00:51.608 ***********
2025-05-06 00:43:48.689468 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:48.690282 | orchestrator |
2025-05-06 00:43:48.692808 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:48.893483 | orchestrator | Tuesday 06 May 2025 00:43:48 +0000 (0:00:00.200) 0:00:51.809 ***********
2025-05-06 00:43:48.893624 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:48.895123 | orchestrator |
2025-05-06 00:43:48.896480 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:48.898428 | orchestrator | Tuesday 06 May 2025 00:43:48 +0000 (0:00:00.205) 0:00:52.014 ***********
2025-05-06 00:43:49.310552 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247)
2025-05-06 00:43:49.310740 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247)
2025-05-06 00:43:49.311672 | orchestrator |
2025-05-06 00:43:49.312057 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:49.312568 | orchestrator | Tuesday 06 May 2025 00:43:49 +0000 (0:00:00.415) 0:00:52.430 ***********
2025-05-06 00:43:49.747987 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9f4cae81-5600-43ad-ae81-4d2d3f64aa06)
2025-05-06 00:43:49.748549 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9f4cae81-5600-43ad-ae81-4d2d3f64aa06)
2025-05-06 00:43:49.748891 | orchestrator |
2025-05-06 00:43:49.751724 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:50.208841 | orchestrator | Tuesday 06 May 2025 00:43:49 +0000 (0:00:00.437) 0:00:52.867 ***********
2025-05-06 00:43:50.209050 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a5a4c6fa-807d-44c7-a556-c4522912d679)
2025-05-06 00:43:50.209417 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a5a4c6fa-807d-44c7-a556-c4522912d679)
2025-05-06 00:43:50.209449 | orchestrator |
2025-05-06 00:43:50.210503 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:50.211123 | orchestrator | Tuesday 06 May 2025 00:43:50 +0000 (0:00:00.461) 0:00:53.329 ***********
2025-05-06 00:43:50.665731 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f2e4c6c8-e338-4410-96b4-d1d5dab5be16)
2025-05-06 00:43:50.666101 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f2e4c6c8-e338-4410-96b4-d1d5dab5be16)
2025-05-06 00:43:50.667291 | orchestrator |
2025-05-06 00:43:50.670383 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-06 00:43:50.671167 | orchestrator | Tuesday 06 May 2025 00:43:50 +0000 (0:00:00.454) 0:00:53.784 ***********
2025-05-06 00:43:51.005973 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-06 00:43:51.006244 | orchestrator |
2025-05-06 00:43:51.006708 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:51.008007 | orchestrator | Tuesday 06 May 2025 00:43:50 +0000 (0:00:00.341) 0:00:54.125 ***********
2025-05-06 00:43:51.495976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-05-06 00:43:51.497141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-05-06 00:43:51.499434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-05-06 00:43:51.502734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-05-06 00:43:51.504473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-05-06 00:43:51.505094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-05-06 00:43:51.505659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-05-06 00:43:51.506349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-05-06 00:43:51.508842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-05-06 00:43:51.509445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-05-06 00:43:51.510510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-05-06 00:43:51.511295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-05-06 00:43:51.511804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-05-06 00:43:51.512633 | orchestrator |
2025-05-06 00:43:51.513676 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:51.513960 | orchestrator | Tuesday 06 May 2025 00:43:51 +0000 (0:00:00.489) 0:00:54.615 ***********
2025-05-06 00:43:52.117768 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:52.118115 | orchestrator |
2025-05-06 00:43:52.123036 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:52.123549 | orchestrator | Tuesday 06 May 2025 00:43:52 +0000 (0:00:00.622) 0:00:55.237 ***********
2025-05-06 00:43:52.317699 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:52.318187 | orchestrator |
2025-05-06 00:43:52.320058 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:52.320753 | orchestrator | Tuesday 06 May 2025 00:43:52 +0000 (0:00:00.202) 0:00:55.439 ***********
2025-05-06 00:43:52.539494 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:52.539687 | orchestrator |
2025-05-06 00:43:52.540840 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:52.541386 | orchestrator | Tuesday 06 May 2025 00:43:52 +0000 (0:00:00.221) 0:00:55.660 ***********
2025-05-06 00:43:52.739217 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:52.739402 | orchestrator |
2025-05-06 00:43:52.739665 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:52.740617 | orchestrator | Tuesday 06 May 2025 00:43:52 +0000 (0:00:00.198) 0:00:55.859 ***********
2025-05-06 00:43:52.954902 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:52.955209 | orchestrator |
2025-05-06 00:43:52.956087 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:52.960352 | orchestrator | Tuesday 06 May 2025 00:43:52 +0000 (0:00:00.216) 0:00:56.075 ***********
2025-05-06 00:43:53.155277 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:53.155852 | orchestrator |
2025-05-06 00:43:53.156480 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:53.157554 | orchestrator | Tuesday 06 May 2025 00:43:53 +0000 (0:00:00.200) 0:00:56.276 ***********
2025-05-06 00:43:53.355471 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:53.355733 | orchestrator |
2025-05-06 00:43:53.356825 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:53.358073 | orchestrator | Tuesday 06 May 2025 00:43:53 +0000 (0:00:00.200) 0:00:56.476 ***********
2025-05-06 00:43:53.562556 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:53.562743 | orchestrator |
2025-05-06 00:43:53.563865 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:53.567318 | orchestrator | Tuesday 06 May 2025 00:43:53 +0000 (0:00:00.206) 0:00:56.682 ***********
2025-05-06 00:43:54.431447 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-05-06 00:43:54.432785 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-05-06 00:43:54.433645 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-05-06 00:43:54.434951 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-05-06 00:43:54.435773 | orchestrator |
2025-05-06 00:43:54.441034 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:54.442262 | orchestrator | Tuesday 06 May 2025 00:43:54 +0000 (0:00:00.868) 0:00:57.551 ***********
2025-05-06 00:43:54.653572 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:54.654068 | orchestrator |
2025-05-06 00:43:54.658259 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:54.658718 | orchestrator | Tuesday 06 May 2025 00:43:54 +0000 (0:00:00.221) 0:00:57.773 ***********
2025-05-06 00:43:55.305372 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:55.306149 | orchestrator |
2025-05-06 00:43:55.307477 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:55.311349 | orchestrator | Tuesday 06 May 2025 00:43:55 +0000 (0:00:00.652) 0:00:58.425 ***********
2025-05-06 00:43:55.522492 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:55.522887 | orchestrator |
2025-05-06 00:43:55.523894 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-06 00:43:55.527023 | orchestrator | Tuesday 06 May 2025 00:43:55 +0000 (0:00:00.216) 0:00:58.642 ***********
2025-05-06 00:43:55.733397 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:55.734381 | orchestrator |
2025-05-06 00:43:55.736308 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-06 00:43:55.737250 | orchestrator | Tuesday 06 May 2025 00:43:55 +0000 (0:00:00.209) 0:00:58.852 ***********
2025-05-06 00:43:55.870107 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:55.871175 | orchestrator |
2025-05-06 00:43:55.871710 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-06 00:43:55.873181 | orchestrator | Tuesday 06 May 2025 00:43:55 +0000 (0:00:00.139) 0:00:58.992 ***********
2025-05-06 00:43:56.074590 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5100a9d2-ae69-5e7a-989d-a5d69986fee9'}})
2025-05-06 00:43:56.074809 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '376b0c1a-f7d0-50df-9bf6-f05e021d85c5'}})
2025-05-06 00:43:56.075828 | orchestrator |
2025-05-06 00:43:56.076599 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-06 00:43:56.077243 | orchestrator | Tuesday 06 May 2025 00:43:56 +0000 (0:00:00.203) 0:00:59.195 ***********
2025-05-06 00:43:57.920876 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})
2025-05-06 00:43:57.923843 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})
2025-05-06 00:43:57.924257 | orchestrator |
2025-05-06 00:43:57.924990 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-06 00:43:57.925689 | orchestrator | Tuesday 06 May 2025 00:43:57 +0000 (0:00:01.840) 0:01:01.035 ***********
2025-05-06 00:43:58.069307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})
2025-05-06 00:43:58.069880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})
2025-05-06 00:43:58.071229 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:58.072044 | orchestrator |
2025-05-06 00:43:58.073036 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-06 00:43:58.073985 | orchestrator | Tuesday 06 May 2025 00:43:58 +0000 (0:00:00.154) 0:01:01.190 ***********
2025-05-06 00:43:59.436645 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})
2025-05-06 00:43:59.437457 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})
2025-05-06 00:43:59.438585 | orchestrator |
2025-05-06 00:43:59.439655 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-06 00:43:59.441018 | orchestrator | Tuesday 06 May 2025 00:43:59 +0000 (0:00:01.359) 0:01:02.550 ***********
2025-05-06 00:43:59.618479 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})
2025-05-06 00:43:59.619785 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})
2025-05-06 00:43:59.621516 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:59.622748 | orchestrator |
2025-05-06 00:43:59.624273 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-06 00:43:59.625110 | orchestrator | Tuesday 06 May 2025 00:43:59 +0000 (0:00:00.189) 0:01:02.740 ***********
2025-05-06 00:43:59.980556 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:43:59.981082 | orchestrator |
2025-05-06 00:43:59.985309 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-06 00:44:00.153603 | orchestrator | Tuesday 06 May 2025 00:43:59 +0000 (0:00:00.359) 0:01:03.099 ***********
2025-05-06 00:44:00.153744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})
2025-05-06 00:44:00.155965 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})
2025-05-06 00:44:00.156998 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:00.158771 | orchestrator |
2025-05-06 00:44:00.160248 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-06 00:44:00.160820 | orchestrator | Tuesday 06 May 2025 00:44:00 +0000 (0:00:00.174) 0:01:03.273 ***********
2025-05-06 00:44:00.311058 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:00.311245 | orchestrator |
2025-05-06 00:44:00.311276 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-06 00:44:00.312172 | orchestrator | Tuesday 06 May 2025 00:44:00 +0000 (0:00:00.155) 0:01:03.429 ***********
2025-05-06 00:44:00.494673 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})
2025-05-06 00:44:00.497241 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})
2025-05-06 00:44:00.497726 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:00.497773 | orchestrator |
2025-05-06 00:44:00.497808 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-06 00:44:00.498968 | orchestrator | Tuesday 06 May 2025 00:44:00 +0000 (0:00:00.185) 0:01:03.615 ***********
2025-05-06 00:44:00.639238 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:00.639897 | orchestrator |
2025-05-06 00:44:00.641737 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-06 00:44:00.642669 | orchestrator | Tuesday 06 May 2025 00:44:00 +0000 (0:00:00.144) 0:01:03.759 ***********
2025-05-06 00:44:00.809564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})
2025-05-06 00:44:00.810338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})
2025-05-06 00:44:00.812849 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:00.813623 | orchestrator |
2025-05-06 00:44:00.814461 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-06 00:44:00.815673 | orchestrator | Tuesday 06 May 2025 00:44:00 +0000 (0:00:00.169) 0:01:03.929 ***********
2025-05-06 00:44:00.967266 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:44:00.968190 | orchestrator |
2025-05-06 00:44:00.969134 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-06 00:44:00.970185 | orchestrator | Tuesday 06 May 2025 00:44:00 +0000 (0:00:00.154) 0:01:04.083 ***********
2025-05-06 00:44:01.128804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})
2025-05-06 00:44:01.130458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})
2025-05-06 00:44:01.133498 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:01.134624 | orchestrator |
2025-05-06 00:44:01.135729 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-06 00:44:01.136847 | orchestrator | Tuesday 06 May 2025 00:44:01 +0000 (0:00:00.165) 0:01:04.249 ***********
2025-05-06 00:44:01.303770 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})
2025-05-06 00:44:01.305542 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})
2025-05-06 00:44:01.307153 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:01.308476 | orchestrator |
2025-05-06 00:44:01.309874 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-06 00:44:01.310881 | orchestrator | Tuesday 06 May 2025 00:44:01 +0000 (0:00:00.175) 0:01:04.424 ***********
2025-05-06 00:44:01.477257 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})
2025-05-06 00:44:01.478639 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})
2025-05-06 00:44:01.479637 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:01.480346 | orchestrator |
2025-05-06 00:44:01.483184 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-06 00:44:01.617736 | orchestrator | Tuesday 06 May 2025 00:44:01 +0000 (0:00:00.171) 0:01:04.595 ***********
2025-05-06 00:44:01.617873 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:01.618134 | orchestrator |
2025-05-06 00:44:01.619152 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-06 00:44:01.620442 | orchestrator | Tuesday 06 May 2025 00:44:01 +0000 (0:00:00.139) 0:01:04.735 ***********
2025-05-06 00:44:02.022636 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:02.023332 | orchestrator |
2025-05-06 00:44:02.024022 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-06 00:44:02.024636 | orchestrator | Tuesday 06 May 2025 00:44:02 +0000 (0:00:00.407) 0:01:05.143 ***********
2025-05-06 00:44:02.162322 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:02.163337 | orchestrator |
2025-05-06 00:44:02.164629 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-06 00:44:02.165828 | orchestrator | Tuesday 06 May 2025 00:44:02 +0000 (0:00:00.139) 0:01:05.282 ***********
2025-05-06 00:44:02.326510 | orchestrator | ok: [testbed-node-5] => {
2025-05-06 00:44:02.327118 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-06 00:44:02.329505 | orchestrator | }
2025-05-06 00:44:02.330122 | orchestrator |
2025-05-06 00:44:02.331471 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-06 00:44:02.332453 | orchestrator | Tuesday 06 May 2025 00:44:02 +0000 (0:00:00.164) 0:01:05.446 ***********
2025-05-06 00:44:02.481201 | orchestrator | ok: [testbed-node-5] => {
2025-05-06 00:44:02.485890 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-06 00:44:02.486002 | orchestrator | }
2025-05-06 00:44:02.487622 | orchestrator |
2025-05-06 00:44:02.488312 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-06 00:44:02.488691 | orchestrator | Tuesday 06 May 2025 00:44:02 +0000 (0:00:00.152) 0:01:05.599 ***********
2025-05-06 00:44:02.629067 | orchestrator | ok: [testbed-node-5] => {
2025-05-06 00:44:02.629630 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-06 00:44:02.630603 | orchestrator | }
2025-05-06 00:44:02.631029 | orchestrator |
2025-05-06 00:44:02.631553 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-06 00:44:02.632113 | orchestrator | Tuesday 06 May 2025 00:44:02 +0000 (0:00:00.149) 0:01:05.749 ***********
2025-05-06 00:44:03.182252 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:44:03.183150 | orchestrator |
2025-05-06 00:44:03.184249 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-06 00:44:03.185402 | orchestrator | Tuesday 06 May 2025 00:44:03 +0000 (0:00:00.523) 0:01:06.302 ***********
2025-05-06 00:44:03.705320 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:44:03.705742 | orchestrator |
2025-05-06 00:44:03.707266 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-06 00:44:04.261172 | orchestrator | Tuesday 06 May 2025 00:44:03 +0000 (0:00:00.556) 0:01:06.825 ***********
2025-05-06 00:44:04.261307 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:44:04.261372 | orchestrator |
2025-05-06 00:44:04.262230 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-06 00:44:04.262622 | orchestrator | Tuesday 06 May 2025 00:44:04 +0000 (0:00:00.148) 0:01:07.381 ***********
2025-05-06 00:44:04.410304 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:44:04.414666 | orchestrator |
2025-05-06 00:44:04.415210 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-06 00:44:04.416622 | orchestrator | Tuesday 06 May 2025 00:44:04 +0000 (0:00:00.148) 0:01:07.529 ***********
2025-05-06 00:44:04.530509 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:04.530824 | orchestrator |
2025-05-06 00:44:04.532009 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-06 00:44:04.533024 | orchestrator | Tuesday 06 May 2025 00:44:04 +0000 (0:00:00.121) 0:01:07.651 ***********
2025-05-06 00:44:04.658079 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:04.658424 | orchestrator |
2025-05-06 00:44:04.659108 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-06 00:44:04.659845 | orchestrator | Tuesday 06 May 2025 00:44:04 +0000 (0:00:00.127) 0:01:07.779 ***********
2025-05-06 00:44:04.995127 | orchestrator | ok: [testbed-node-5] => {
2025-05-06 00:44:04.997817 | orchestrator |  "vgs_report": {
2025-05-06 00:44:05.000300 | orchestrator |  "vg": []
2025-05-06 00:44:05.000663 | orchestrator |  }
2025-05-06 00:44:05.000688 | orchestrator | }
2025-05-06 00:44:05.000708 | orchestrator |
2025-05-06 00:44:05.001479 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-06 00:44:05.002169 | orchestrator | Tuesday 06 May 2025 00:44:04 +0000 (0:00:00.337) 0:01:08.116 ***********
2025-05-06 00:44:05.140239 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:05.140683 | orchestrator |
2025-05-06 00:44:05.141422 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-06 00:44:05.142121 | orchestrator | Tuesday 06 May 2025 00:44:05 +0000 (0:00:00.143) 0:01:08.259 ***********
2025-05-06 00:44:05.278348 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:05.278615 | orchestrator |
2025-05-06 00:44:05.280242 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-06 00:44:05.280573 | orchestrator | Tuesday 06 May 2025 00:44:05 +0000 (0:00:00.139) 0:01:08.399 ***********
2025-05-06 00:44:05.419954 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:44:05.421239 | orchestrator |
2025-05-06 00:44:05.422130 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-06 00:44:05.423435 | orchestrator | Tuesday 06 May 2025 00:44:05 +0000 (0:00:00.141) 0:01:08.540 ***********
2025-05-06 00:44:05.564610 |
orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:05.565138 | orchestrator | 2025-05-06 00:44:05.565763 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-06 00:44:05.567472 | orchestrator | Tuesday 06 May 2025 00:44:05 +0000 (0:00:00.145) 0:01:08.685 *********** 2025-05-06 00:44:05.703500 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:05.703646 | orchestrator | 2025-05-06 00:44:05.704840 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-06 00:44:05.705946 | orchestrator | Tuesday 06 May 2025 00:44:05 +0000 (0:00:00.137) 0:01:08.823 *********** 2025-05-06 00:44:05.835360 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:05.837013 | orchestrator | 2025-05-06 00:44:05.837114 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-06 00:44:05.837984 | orchestrator | Tuesday 06 May 2025 00:44:05 +0000 (0:00:00.132) 0:01:08.955 *********** 2025-05-06 00:44:05.972433 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:05.973034 | orchestrator | 2025-05-06 00:44:05.974831 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-06 00:44:05.975776 | orchestrator | Tuesday 06 May 2025 00:44:05 +0000 (0:00:00.137) 0:01:09.093 *********** 2025-05-06 00:44:06.110952 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:06.111457 | orchestrator | 2025-05-06 00:44:06.112099 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-06 00:44:06.112790 | orchestrator | Tuesday 06 May 2025 00:44:06 +0000 (0:00:00.137) 0:01:09.231 *********** 2025-05-06 00:44:06.244464 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:06.245375 | orchestrator | 2025-05-06 00:44:06.246434 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2025-05-06 00:44:06.247510 | orchestrator | Tuesday 06 May 2025 00:44:06 +0000 (0:00:00.133) 0:01:09.365 *********** 2025-05-06 00:44:06.378727 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:06.379309 | orchestrator | 2025-05-06 00:44:06.380087 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-06 00:44:06.380800 | orchestrator | Tuesday 06 May 2025 00:44:06 +0000 (0:00:00.134) 0:01:09.500 *********** 2025-05-06 00:44:06.515493 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:06.515871 | orchestrator | 2025-05-06 00:44:06.517077 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-06 00:44:06.517707 | orchestrator | Tuesday 06 May 2025 00:44:06 +0000 (0:00:00.136) 0:01:09.636 *********** 2025-05-06 00:44:06.878604 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:06.880729 | orchestrator | 2025-05-06 00:44:06.881706 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-06 00:44:06.884753 | orchestrator | Tuesday 06 May 2025 00:44:06 +0000 (0:00:00.363) 0:01:10.000 *********** 2025-05-06 00:44:07.005745 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:07.006413 | orchestrator | 2025-05-06 00:44:07.007149 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-06 00:44:07.007988 | orchestrator | Tuesday 06 May 2025 00:44:07 +0000 (0:00:00.126) 0:01:10.127 *********** 2025-05-06 00:44:07.159147 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:07.323817 | orchestrator | 2025-05-06 00:44:07.324010 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-06 00:44:07.324034 | orchestrator | Tuesday 06 May 2025 00:44:07 +0000 (0:00:00.142) 0:01:10.269 *********** 2025-05-06 00:44:07.324114 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:07.324210 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:07.325504 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:07.326656 | orchestrator | 2025-05-06 00:44:07.328692 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-06 00:44:07.489497 | orchestrator | Tuesday 06 May 2025 00:44:07 +0000 (0:00:00.175) 0:01:10.445 *********** 2025-05-06 00:44:07.489652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:07.490573 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:07.491593 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:07.493693 | orchestrator | 2025-05-06 00:44:07.663522 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-06 00:44:07.663646 | orchestrator | Tuesday 06 May 2025 00:44:07 +0000 (0:00:00.164) 0:01:10.609 *********** 2025-05-06 00:44:07.663681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:07.664133 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:07.664978 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:07.665713 | orchestrator | 2025-05-06 00:44:07.666818 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2025-05-06 00:44:07.667308 | orchestrator | Tuesday 06 May 2025 00:44:07 +0000 (0:00:00.174) 0:01:10.784 *********** 2025-05-06 00:44:07.823192 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:07.824157 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:07.825294 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:07.826387 | orchestrator | 2025-05-06 00:44:07.828153 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-06 00:44:07.985131 | orchestrator | Tuesday 06 May 2025 00:44:07 +0000 (0:00:00.160) 0:01:10.944 *********** 2025-05-06 00:44:07.985280 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:07.986985 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:07.987037 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:07.987072 | orchestrator | 2025-05-06 00:44:08.162387 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-06 00:44:08.162553 | orchestrator | Tuesday 06 May 2025 00:44:07 +0000 (0:00:00.158) 0:01:11.103 *********** 2025-05-06 00:44:08.162605 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:08.163241 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:08.164210 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:08.165207 | orchestrator | 2025-05-06 00:44:08.166259 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-06 00:44:08.167148 | orchestrator | Tuesday 06 May 2025 00:44:08 +0000 (0:00:00.179) 0:01:11.282 *********** 2025-05-06 00:44:08.346398 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:08.346587 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:08.347687 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:08.348650 | orchestrator | 2025-05-06 00:44:08.349781 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-06 00:44:08.350470 | orchestrator | Tuesday 06 May 2025 00:44:08 +0000 (0:00:00.184) 0:01:11.466 *********** 2025-05-06 00:44:08.505292 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:08.507570 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:08.508496 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:08.509454 | orchestrator | 2025-05-06 00:44:08.510610 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-06 00:44:09.231372 | orchestrator | Tuesday 06 May 2025 00:44:08 +0000 (0:00:00.160) 0:01:11.627 *********** 2025-05-06 00:44:09.231508 | 
orchestrator | ok: [testbed-node-5] 2025-05-06 00:44:09.232409 | orchestrator | 2025-05-06 00:44:09.233353 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-06 00:44:09.233966 | orchestrator | Tuesday 06 May 2025 00:44:09 +0000 (0:00:00.724) 0:01:12.351 *********** 2025-05-06 00:44:09.748569 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:44:09.748749 | orchestrator | 2025-05-06 00:44:09.748798 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-06 00:44:09.899873 | orchestrator | Tuesday 06 May 2025 00:44:09 +0000 (0:00:00.518) 0:01:12.870 *********** 2025-05-06 00:44:09.900078 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:44:09.900738 | orchestrator | 2025-05-06 00:44:09.901179 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-06 00:44:09.902294 | orchestrator | Tuesday 06 May 2025 00:44:09 +0000 (0:00:00.150) 0:01:13.021 *********** 2025-05-06 00:44:10.089557 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'vg_name': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'}) 2025-05-06 00:44:10.089725 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'vg_name': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'}) 2025-05-06 00:44:10.090853 | orchestrator | 2025-05-06 00:44:10.091997 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-06 00:44:10.092570 | orchestrator | Tuesday 06 May 2025 00:44:10 +0000 (0:00:00.188) 0:01:13.209 *********** 2025-05-06 00:44:10.258400 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:10.258822 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:10.260434 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:10.261814 | orchestrator | 2025-05-06 00:44:10.262391 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-06 00:44:10.263452 | orchestrator | Tuesday 06 May 2025 00:44:10 +0000 (0:00:00.167) 0:01:13.377 *********** 2025-05-06 00:44:10.420089 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:10.420345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:10.421193 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:10.421530 | orchestrator | 2025-05-06 00:44:10.421778 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-06 00:44:10.422411 | orchestrator | Tuesday 06 May 2025 00:44:10 +0000 (0:00:00.164) 0:01:13.541 *********** 2025-05-06 00:44:10.604689 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'})  2025-05-06 00:44:10.605103 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'})  2025-05-06 00:44:10.605142 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:10.605663 | orchestrator | 2025-05-06 00:44:10.606596 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-06 00:44:10.608826 | orchestrator | Tuesday 06 May 2025 00:44:10 +0000 (0:00:00.183) 0:01:13.725 *********** 2025-05-06 00:44:11.178306 | 
orchestrator | ok: [testbed-node-5] => { 2025-05-06 00:44:11.179090 | orchestrator |  "lvm_report": { 2025-05-06 00:44:11.180769 | orchestrator |  "lv": [ 2025-05-06 00:44:11.181331 | orchestrator |  { 2025-05-06 00:44:11.182531 | orchestrator |  "lv_name": "osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5", 2025-05-06 00:44:11.183629 | orchestrator |  "vg_name": "ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5" 2025-05-06 00:44:11.184949 | orchestrator |  }, 2025-05-06 00:44:11.185770 | orchestrator |  { 2025-05-06 00:44:11.186527 | orchestrator |  "lv_name": "osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9", 2025-05-06 00:44:11.187492 | orchestrator |  "vg_name": "ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9" 2025-05-06 00:44:11.188229 | orchestrator |  } 2025-05-06 00:44:11.188928 | orchestrator |  ], 2025-05-06 00:44:11.189670 | orchestrator |  "pv": [ 2025-05-06 00:44:11.190355 | orchestrator |  { 2025-05-06 00:44:11.190848 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-06 00:44:11.191359 | orchestrator |  "vg_name": "ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9" 2025-05-06 00:44:11.192186 | orchestrator |  }, 2025-05-06 00:44:11.192983 | orchestrator |  { 2025-05-06 00:44:11.193763 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-06 00:44:11.194290 | orchestrator |  "vg_name": "ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5" 2025-05-06 00:44:11.195508 | orchestrator |  } 2025-05-06 00:44:11.195614 | orchestrator |  ] 2025-05-06 00:44:11.196411 | orchestrator |  } 2025-05-06 00:44:11.197105 | orchestrator | } 2025-05-06 00:44:11.197632 | orchestrator | 2025-05-06 00:44:11.198286 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:44:11.198563 | orchestrator | 2025-05-06 00:44:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-06 00:44:11.198654 | orchestrator | 2025-05-06 00:44:11 | INFO  | Please wait and do not abort execution. 
2025-05-06 00:44:11.199362 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-06 00:44:11.199878 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-06 00:44:11.200247 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-06 00:44:11.200707 | orchestrator | 2025-05-06 00:44:11.201134 | orchestrator | 2025-05-06 00:44:11.201987 | orchestrator | 2025-05-06 00:44:11.202129 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 00:44:11.202521 | orchestrator | Tuesday 06 May 2025 00:44:11 +0000 (0:00:00.573) 0:01:14.299 *********** 2025-05-06 00:44:11.203291 | orchestrator | =============================================================================== 2025-05-06 00:44:11.203570 | orchestrator | Create block VGs -------------------------------------------------------- 5.76s 2025-05-06 00:44:11.204068 | orchestrator | Create block LVs -------------------------------------------------------- 4.22s 2025-05-06 00:44:11.204481 | orchestrator | Print LVM report data --------------------------------------------------- 2.16s 2025-05-06 00:44:11.204845 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.88s 2025-05-06 00:44:11.205243 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.79s 2025-05-06 00:44:11.205733 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.62s 2025-05-06 00:44:11.206077 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.61s 2025-05-06 00:44:11.206506 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s 2025-05-06 00:44:11.206989 | orchestrator | Add known links to the list of available block devices 
------------------ 1.45s 2025-05-06 00:44:11.207345 | orchestrator | Add known partitions to the list of available block devices ------------- 1.37s 2025-05-06 00:44:11.207871 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.05s 2025-05-06 00:44:11.208529 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2025-05-06 00:44:11.208690 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.72s 2025-05-06 00:44:11.209343 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.71s 2025-05-06 00:44:11.209695 | orchestrator | Fail if number of OSDs exceeds num_osds for a WAL VG -------------------- 0.70s 2025-05-06 00:44:11.210100 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.69s 2025-05-06 00:44:11.210613 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2025-05-06 00:44:11.210964 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2025-05-06 00:44:11.211535 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.67s 2025-05-06 00:44:11.211669 | orchestrator | Print number of OSDs wanted per WAL VG ---------------------------------- 0.66s 2025-05-06 00:44:13.037558 | orchestrator | 2025-05-06 00:44:13 | INFO  | Task 08d12b6e-9c9a-4a9c-8469-9ce9704cc167 (facts) was prepared for execution. 2025-05-06 00:44:16.144784 | orchestrator | 2025-05-06 00:44:13 | INFO  | It takes a moment until task 08d12b6e-9c9a-4a9c-8469-9ce9704cc167 (facts) has been started and output is visible here. 
2025-05-06 00:44:16.145014 | orchestrator | 2025-05-06 00:44:16.145581 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-06 00:44:16.149182 | orchestrator | 2025-05-06 00:44:16.149917 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-06 00:44:16.150531 | orchestrator | Tuesday 06 May 2025 00:44:16 +0000 (0:00:00.193) 0:00:00.193 *********** 2025-05-06 00:44:17.163270 | orchestrator | ok: [testbed-manager] 2025-05-06 00:44:17.164560 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:44:17.165990 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:44:17.167199 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:44:17.168178 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:44:17.168628 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:44:17.169405 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:44:17.170059 | orchestrator | 2025-05-06 00:44:17.170837 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-06 00:44:17.171502 | orchestrator | Tuesday 06 May 2025 00:44:17 +0000 (0:00:01.016) 0:00:01.210 *********** 2025-05-06 00:44:17.319633 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:44:17.399861 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:44:17.479344 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:44:17.557533 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:44:17.661959 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:44:18.398712 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:44:18.399957 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:18.402381 | orchestrator | 2025-05-06 00:44:18.403201 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-06 00:44:18.403230 | orchestrator | 2025-05-06 00:44:18.403254 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-05-06 00:44:18.405647 | orchestrator | Tuesday 06 May 2025 00:44:18 +0000 (0:00:01.239) 0:00:02.449 *********** 2025-05-06 00:44:23.115961 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:44:23.117082 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:44:23.117134 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:44:23.118146 | orchestrator | ok: [testbed-manager] 2025-05-06 00:44:23.118609 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:44:23.119287 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:44:23.120364 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:44:23.120524 | orchestrator | 2025-05-06 00:44:23.121532 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-06 00:44:23.122055 | orchestrator | 2025-05-06 00:44:23.123062 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-06 00:44:23.123799 | orchestrator | Tuesday 06 May 2025 00:44:23 +0000 (0:00:04.718) 0:00:07.167 *********** 2025-05-06 00:44:23.486963 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:44:23.563391 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:44:23.642998 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:44:23.724159 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:44:23.801338 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:44:23.843047 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:44:23.843189 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:44:23.843950 | orchestrator | 2025-05-06 00:44:23.844846 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:44:23.845702 | orchestrator | 2025-05-06 00:44:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-06 00:44:23.846117 | orchestrator | 2025-05-06 00:44:23 | INFO  | Please wait and do not abort execution. 2025-05-06 00:44:23.846153 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:44:23.846962 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:44:23.847800 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:44:23.848488 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:44:23.849290 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:44:23.849595 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:44:23.850389 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 00:44:23.851176 | orchestrator | 2025-05-06 00:44:23.852080 | orchestrator | Tuesday 06 May 2025 00:44:23 +0000 (0:00:00.728) 0:00:07.895 *********** 2025-05-06 00:44:23.852808 | orchestrator | =============================================================================== 2025-05-06 00:44:23.853506 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.72s 2025-05-06 00:44:23.854233 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2025-05-06 00:44:23.854528 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s 2025-05-06 00:44:23.855418 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.73s 2025-05-06 00:44:24.398715 | orchestrator | 2025-05-06 00:44:24.401078 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue May 6 00:44:24 UTC 2025 2025-05-06 00:44:25.799908 | 
orchestrator | 2025-05-06 00:44:25.800052 | orchestrator | 2025-05-06 00:44:25 | INFO  | Collection nutshell is prepared for execution 2025-05-06 00:44:25.804072 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [0] - dotfiles 2025-05-06 00:44:25.804119 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [0] - homer 2025-05-06 00:44:25.804337 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [0] - netdata 2025-05-06 00:44:25.804375 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [0] - openstackclient 2025-05-06 00:44:25.804409 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [0] - phpmyadmin 2025-05-06 00:44:25.805530 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [0] - common 2025-05-06 00:44:25.805562 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [1] -- loadbalancer 2025-05-06 00:44:25.806268 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [2] --- opensearch 2025-05-06 00:44:25.806393 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [2] --- mariadb-ng 2025-05-06 00:44:25.806491 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [3] ---- horizon 2025-05-06 00:44:25.806511 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [3] ---- keystone 2025-05-06 00:44:25.806525 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [4] ----- neutron 2025-05-06 00:44:25.806540 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [5] ------ wait-for-nova 2025-05-06 00:44:25.806561 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [5] ------ octavia 2025-05-06 00:44:25.808328 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [4] ----- barbican 2025-05-06 00:44:25.808354 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [4] ----- designate 2025-05-06 00:44:25.808376 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [4] ----- ironic 2025-05-06 00:44:25.808726 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [4] ----- placement 2025-05-06 00:44:25.808765 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [4] ----- magnum 2025-05-06 00:44:25.808780 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [1] 
-- openvswitch 2025-05-06 00:44:25.808794 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [2] --- ovn 2025-05-06 00:44:25.808809 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [1] -- memcached 2025-05-06 00:44:25.808823 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [1] -- redis 2025-05-06 00:44:25.808838 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [1] -- rabbitmq-ng 2025-05-06 00:44:25.808852 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [0] - kubernetes 2025-05-06 00:44:25.808897 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [1] -- kubeconfig 2025-05-06 00:44:25.808912 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [1] -- copy-kubeconfig 2025-05-06 00:44:25.808926 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [0] - ceph 2025-05-06 00:44:25.808945 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [1] -- ceph-pools 2025-05-06 00:44:25.809002 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [2] --- copy-ceph-keys 2025-05-06 00:44:25.809023 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [3] ---- cephclient 2025-05-06 00:44:25.809218 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-06 00:44:25.809326 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [4] ----- wait-for-keystone 2025-05-06 00:44:25.809347 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-06 00:44:25.809388 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [5] ------ glance 2025-05-06 00:44:25.809403 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [5] ------ cinder 2025-05-06 00:44:25.809421 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [5] ------ nova 2025-05-06 00:44:25.809492 | orchestrator | 2025-05-06 00:44:25 | INFO  | A [4] ----- prometheus 2025-05-06 00:44:25.927395 | orchestrator | 2025-05-06 00:44:25 | INFO  | D [5] ------ grafana 2025-05-06 00:44:25.927528 | orchestrator | 2025-05-06 00:44:25 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-06 00:44:27.847065 | 
orchestrator | 2025-05-06 00:44:25 | INFO  | Tasks are running in the background 2025-05-06 00:44:27.847206 | orchestrator | 2025-05-06 00:44:27 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-06 00:44:29.946225 | orchestrator | 2025-05-06 00:44:29 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:44:29.946411 | orchestrator | 2025-05-06 00:44:29 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:44:29.947005 | orchestrator | 2025-05-06 00:44:29 | INFO  | Task 8eba389d-b5ce-4387-8080-73a9c7270126 is in state STARTED 2025-05-06 00:44:29.947481 | orchestrator | 2025-05-06 00:44:29 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:44:29.947903 | orchestrator | 2025-05-06 00:44:29 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:44:29.948561 | orchestrator | 2025-05-06 00:44:29 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:44:32.997192 | orchestrator | 2025-05-06 00:44:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:44:32.997331 | orchestrator | 2025-05-06 00:44:32 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:44:32.997491 | orchestrator | 2025-05-06 00:44:32 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:44:32.998162 | orchestrator | 2025-05-06 00:44:32 | INFO  | Task 8eba389d-b5ce-4387-8080-73a9c7270126 is in state STARTED 2025-05-06 00:44:32.998581 | orchestrator | 2025-05-06 00:44:32 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:44:32.999088 | orchestrator | 2025-05-06 00:44:32 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:44:32.999693 | orchestrator | 2025-05-06 00:44:32 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:44:36.046670 | 
orchestrator | 2025-05-06 00:44:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:44:36.046800 | orchestrator | 2025-05-06 00:44:36 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:44:36.046946 | orchestrator | 2025-05-06 00:44:36 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:44:36.048101 | orchestrator | 2025-05-06 00:44:36 | INFO  | Task 8eba389d-b5ce-4387-8080-73a9c7270126 is in state STARTED 2025-05-06 00:44:36.052019 | orchestrator | 2025-05-06 00:44:36 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:44:36.052398 | orchestrator | 2025-05-06 00:44:36 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:44:36.053035 | orchestrator | 2025-05-06 00:44:36 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:44:36.053243 | orchestrator | 2025-05-06 00:44:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:44:39.140116 | orchestrator | 2025-05-06 00:44:39 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:44:39.146143 | orchestrator | 2025-05-06 00:44:39 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:44:39.151955 | orchestrator | 2025-05-06 00:44:39 | INFO  | Task 8eba389d-b5ce-4387-8080-73a9c7270126 is in state STARTED 2025-05-06 00:44:39.155369 | orchestrator | 2025-05-06 00:44:39 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:44:39.158538 | orchestrator | 2025-05-06 00:44:39 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:44:39.163105 | orchestrator | 2025-05-06 00:44:39 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:44:42.214353 | orchestrator | 2025-05-06 00:44:39 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:44:42.214476 | orchestrator | 2025-05-06 
00:44:42 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:44:42.217612 | orchestrator | 2025-05-06 00:44:42 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:44:42.223427 | orchestrator | 2025-05-06 00:44:42 | INFO  | Task 8eba389d-b5ce-4387-8080-73a9c7270126 is in state STARTED 2025-05-06 00:44:42.223995 | orchestrator | 2025-05-06 00:44:42 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:44:42.224489 | orchestrator | 2025-05-06 00:44:42 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:44:42.224934 | orchestrator | 2025-05-06 00:44:42 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:44:42.225009 | orchestrator | 2025-05-06 00:44:42 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:44:45.272610 | orchestrator | 2025-05-06 00:44:45 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:44:45.274475 | orchestrator | 2025-05-06 00:44:45.274564 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-06 00:44:45.274584 | orchestrator | 2025-05-06 00:44:45.274600 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-05-06 00:44:45.274615 | orchestrator | Tuesday 06 May 2025 00:44:31 +0000 (0:00:00.419) 0:00:00.419 *********** 2025-05-06 00:44:45.274629 | orchestrator | changed: [testbed-manager] 2025-05-06 00:44:45.274644 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:44:45.274658 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:44:45.274672 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:44:45.274686 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:44:45.274699 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:44:45.274713 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:44:45.274726 | orchestrator | 2025-05-06 00:44:45.274740 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-05-06 00:44:45.274807 | orchestrator | Tuesday 06 May 2025 00:44:35 +0000 (0:00:03.603) 0:00:04.023 *********** 2025-05-06 00:44:45.274823 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-06 00:44:45.274871 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-06 00:44:45.274893 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-06 00:44:45.274908 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-06 00:44:45.274922 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-06 00:44:45.274936 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-06 00:44:45.274949 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-06 00:44:45.274963 | orchestrator | 2025-05-06 00:44:45.274977 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-05-06 00:44:45.274991 | orchestrator | Tuesday 06 May 2025 00:44:37 +0000 (0:00:02.429) 0:00:06.452 *********** 2025-05-06 00:44:45.275031 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-06 00:44:36.157960', 'end': '2025-05-06 00:44:36.166633', 'delta': '0:00:00.008673', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-06 00:44:45.275058 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-06 00:44:36.000329', 'end': '2025-05-06 00:44:36.009522', 'delta': '0:00:00.009193', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-06 00:44:45.275076 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-06 00:44:36.026253', 'end': '2025-05-06 00:44:36.030414', 'delta': '0:00:00.004161', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-06 00:44:45.275120 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-06 00:44:36.373308', 'end': '2025-05-06 00:44:36.381976', 'delta': '0:00:00.008668', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-06 00:44:45.275137 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-06 00:44:36.607977', 'end': '2025-05-06 00:44:36.617143', 'delta': '0:00:00.009166', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-06 00:44:45.275161 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-06 00:44:36.967979', 'end': '2025-05-06 00:44:36.979972', 'delta': '0:00:00.011993', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-06 00:44:45.275182 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-06 00:44:37.553757', 'end': '2025-05-06 00:44:37.564087', 'delta': '0:00:00.010330', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-06 00:44:45.275198 | orchestrator | 2025-05-06 00:44:45.275214 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-05-06 00:44:45.275230 | orchestrator | Tuesday 06 May 2025 00:44:40 +0000 (0:00:02.411) 0:00:08.864 *********** 2025-05-06 00:44:45.275245 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-06 00:44:45.275262 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-06 00:44:45.275278 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-06 00:44:45.275294 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-06 00:44:45.275309 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-06 00:44:45.275323 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-06 00:44:45.275338 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-06 00:44:45.275362 | orchestrator | 2025-05-06 00:44:45.275386 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:44:45.275410 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:44:45.275436 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:44:45.275451 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:44:45.275473 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:44:45.275516 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:44:45.275531 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:44:45.275545 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:44:45.275568 | orchestrator | 2025-05-06 00:44:45.275582 | orchestrator | Tuesday 06 May 2025 00:44:42 +0000 (0:00:02.284) 0:00:11.148 *********** 2025-05-06 00:44:45.275595 | orchestrator | =============================================================================== 2025-05-06 00:44:45.275609 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.60s 2025-05-06 00:44:45.275623 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.43s 2025-05-06 00:44:45.275637 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.41s 2025-05-06 00:44:45.275651 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.28s 2025-05-06 00:44:45.275669 | orchestrator | 2025-05-06 00:44:45 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:44:45.275752 | orchestrator | 2025-05-06 00:44:45 | INFO  | Task 8eba389d-b5ce-4387-8080-73a9c7270126 is in state SUCCESS 2025-05-06 00:44:45.275774 | orchestrator | 2025-05-06 00:44:45 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:44:45.276643 | orchestrator | 2025-05-06 00:44:45 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:44:45.276954 | orchestrator | 2025-05-06 00:44:45 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:44:45.281247 | orchestrator | 2025-05-06 00:44:45 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:44:45.282171 | orchestrator | 2025-05-06 00:44:45 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:44:48.343526 | orchestrator | 2025-05-06 00:44:48 | INFO  | Task 
e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:44:48.344973 | orchestrator | 2025-05-06 00:44:48 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:44:48.353372 | orchestrator | 2025-05-06 00:44:48 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:44:48.353479 | orchestrator | 2025-05-06 00:44:48 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:44:51.416795 | orchestrator | 2025-05-06 00:44:48 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:44:51.416956 | orchestrator | 2025-05-06 00:44:48 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:44:51.416976 | orchestrator | 2025-05-06 00:44:48 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:44:51.417006 | orchestrator | 2025-05-06 00:44:51 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:44:51.418518 | orchestrator | 2025-05-06 00:44:51 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:44:51.423033 | orchestrator | 2025-05-06 00:44:51 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:44:51.424652 | orchestrator | 2025-05-06 00:44:51 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:44:51.424686 | orchestrator | 2025-05-06 00:44:51 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:44:51.426629 | orchestrator | 2025-05-06 00:44:51 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:44:51.428432 | orchestrator | 2025-05-06 00:44:51 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:44:54.494569 | orchestrator | 2025-05-06 00:44:54 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:44:54.500410 | orchestrator | 2025-05-06 00:44:54 | INFO  | Task 
9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:44:54.500519 | orchestrator | 2025-05-06 00:44:54 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:44:54.503747 | orchestrator | 2025-05-06 00:44:54 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:44:54.507515 | orchestrator | 2025-05-06 00:44:54 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:44:57.565546 | orchestrator | 2025-05-06 00:44:54 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:44:57.565648 | orchestrator | 2025-05-06 00:44:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:44:57.565685 | orchestrator | 2025-05-06 00:44:57 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:44:57.567393 | orchestrator | 2025-05-06 00:44:57 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:44:57.569396 | orchestrator | 2025-05-06 00:44:57 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:44:57.570982 | orchestrator | 2025-05-06 00:44:57 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:44:57.571103 | orchestrator | 2025-05-06 00:44:57 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:44:57.573012 | orchestrator | 2025-05-06 00:44:57 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:00.627403 | orchestrator | 2025-05-06 00:44:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:00.627541 | orchestrator | 2025-05-06 00:45:00 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:45:00.628153 | orchestrator | 2025-05-06 00:45:00 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:00.628888 | orchestrator | 2025-05-06 00:45:00 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:00.630112 | orchestrator | 2025-05-06 00:45:00 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:00.630475 | orchestrator | 2025-05-06 00:45:00 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:45:00.631413 | orchestrator | 2025-05-06 00:45:00 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:03.676377 | orchestrator | 2025-05-06 00:45:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:03.676549 | orchestrator | 2025-05-06 00:45:03 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:45:03.676916 | orchestrator | 2025-05-06 00:45:03 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:03.678927 | orchestrator | 2025-05-06 00:45:03 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:03.681193 | orchestrator | 2025-05-06 00:45:03 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:03.682314 | orchestrator | 2025-05-06 00:45:03 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:45:03.684246 | orchestrator | 2025-05-06 00:45:03 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:06.755372 | orchestrator | 2025-05-06 00:45:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:06.755520 | orchestrator | 2025-05-06 00:45:06 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:45:06.757170 | orchestrator | 2025-05-06 00:45:06 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:06.757235 | orchestrator | 2025-05-06 00:45:06 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:06.757258 | orchestrator | 2025-05-06 00:45:06 | INFO  | Task 
5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:06.761342 | orchestrator | 2025-05-06 00:45:06 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state STARTED 2025-05-06 00:45:06.763217 | orchestrator | 2025-05-06 00:45:06 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:09.837214 | orchestrator | 2025-05-06 00:45:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:09.837367 | orchestrator | 2025-05-06 00:45:09 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:45:09.839785 | orchestrator | 2025-05-06 00:45:09 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:09.839860 | orchestrator | 2025-05-06 00:45:09 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:09.841057 | orchestrator | 2025-05-06 00:45:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:45:09.847115 | orchestrator | 2025-05-06 00:45:09 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:09.847725 | orchestrator | 2025-05-06 00:45:09 | INFO  | Task 4582d9ed-280a-4b56-a807-11ddb449f8f3 is in state SUCCESS 2025-05-06 00:45:09.850012 | orchestrator | 2025-05-06 00:45:09 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:09.852605 | orchestrator | 2025-05-06 00:45:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:12.909031 | orchestrator | 2025-05-06 00:45:12 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:45:12.914729 | orchestrator | 2025-05-06 00:45:12 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:12.918980 | orchestrator | 2025-05-06 00:45:12 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:12.921952 | orchestrator | 2025-05-06 00:45:12 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:45:12.927076 | orchestrator | 2025-05-06 00:45:12 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:12.929597 | orchestrator | 2025-05-06 00:45:12 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:16.016764 | orchestrator | 2025-05-06 00:45:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:16.016926 | orchestrator | 2025-05-06 00:45:16 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:45:16.019289 | orchestrator | 2025-05-06 00:45:16 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:16.020278 | orchestrator | 2025-05-06 00:45:16 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:16.021290 | orchestrator | 2025-05-06 00:45:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:45:16.021375 | orchestrator | 2025-05-06 00:45:16 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:16.023508 | orchestrator | 2025-05-06 00:45:16 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:19.104355 | orchestrator | 2025-05-06 00:45:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:19.104455 | orchestrator | 2025-05-06 00:45:19 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:45:19.105654 | orchestrator | 2025-05-06 00:45:19 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:19.106429 | orchestrator | 2025-05-06 00:45:19 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:19.107205 | orchestrator | 2025-05-06 00:45:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:45:19.110155 | orchestrator | 2025-05-06 00:45:19 | INFO  | Task 
5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:22.149031 | orchestrator | 2025-05-06 00:45:19 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:22.149137 | orchestrator | 2025-05-06 00:45:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:22.149173 | orchestrator | 2025-05-06 00:45:22 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:45:22.153774 | orchestrator | 2025-05-06 00:45:22 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:22.155221 | orchestrator | 2025-05-06 00:45:22 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:22.158324 | orchestrator | 2025-05-06 00:45:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:45:22.159027 | orchestrator | 2025-05-06 00:45:22 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:22.160410 | orchestrator | 2025-05-06 00:45:22 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:22.160548 | orchestrator | 2025-05-06 00:45:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:25.228208 | orchestrator | 2025-05-06 00:45:25 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state STARTED 2025-05-06 00:45:25.228384 | orchestrator | 2025-05-06 00:45:25 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:25.229403 | orchestrator | 2025-05-06 00:45:25 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:25.229933 | orchestrator | 2025-05-06 00:45:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:45:25.230572 | orchestrator | 2025-05-06 00:45:25 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:25.232354 | orchestrator | 2025-05-06 00:45:25 | INFO  | Task 
20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:28.273252 | orchestrator | 2025-05-06 00:45:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:28.273335 | orchestrator | 2025-05-06 00:45:28 | INFO  | Task e2c22f9a-a223-4047-86a8-56204ad3b0fd is in state SUCCESS 2025-05-06 00:45:28.273712 | orchestrator | 2025-05-06 00:45:28 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:28.275680 | orchestrator | 2025-05-06 00:45:28 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:28.276387 | orchestrator | 2025-05-06 00:45:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:45:28.278830 | orchestrator | 2025-05-06 00:45:28 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:28.279390 | orchestrator | 2025-05-06 00:45:28 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:28.279600 | orchestrator | 2025-05-06 00:45:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:31.318093 | orchestrator | 2025-05-06 00:45:31 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED 2025-05-06 00:45:31.320818 | orchestrator | 2025-05-06 00:45:31 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:45:31.322105 | orchestrator | 2025-05-06 00:45:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:45:31.329319 | orchestrator | 2025-05-06 00:45:31 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:45:31.333269 | orchestrator | 2025-05-06 00:45:31 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED 2025-05-06 00:45:34.375748 | orchestrator | 2025-05-06 00:45:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:45:34.375922 | orchestrator | 2025-05-06 00:45:34 | INFO  | Task 
9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED
2025-05-06 00:45:34.376339 | orchestrator | 2025-05-06 00:45:34 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:45:34.376871 | orchestrator | 2025-05-06 00:45:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:45:34.377492 | orchestrator | 2025-05-06 00:45:34 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:45:34.378013 | orchestrator | 2025-05-06 00:45:34 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED
2025-05-06 00:45:37.407925 | orchestrator | 2025-05-06 00:45:34 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:45:37.408146 | orchestrator | 2025-05-06 00:45:37 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state STARTED
2025-05-06 00:45:37.408259 | orchestrator | 2025-05-06 00:45:37 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:45:37.408674 | orchestrator | 2025-05-06 00:45:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:45:37.409138 | orchestrator | 2025-05-06 00:45:37 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:45:37.409594 | orchestrator | 2025-05-06 00:45:37 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED
2025-05-06 00:45:37.410516 | orchestrator | 2025-05-06 00:45:37 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:45:40.454248 | orchestrator |
2025-05-06 00:45:40.454405 | orchestrator |
2025-05-06 00:45:40.454426 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-06 00:45:40.454442 | orchestrator |
2025-05-06 00:45:40.454457 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-06 00:45:40.454471 | orchestrator | Tuesday 06 May 2025 00:44:33 +0000 (0:00:00.269) 0:00:00.269 ***********
2025-05-06 00:45:40.454485 | orchestrator | ok: [testbed-manager] => {
2025-05-06 00:45:40.454501 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-06 00:45:40.454516 | orchestrator | }
2025-05-06 00:45:40.454530 | orchestrator |
2025-05-06 00:45:40.454544 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-06 00:45:40.454558 | orchestrator | Tuesday 06 May 2025 00:44:33 +0000 (0:00:00.106) 0:00:00.376 ***********
2025-05-06 00:45:40.454572 | orchestrator | ok: [testbed-manager]
2025-05-06 00:45:40.454627 | orchestrator |
2025-05-06 00:45:40.454644 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-06 00:45:40.454658 | orchestrator | Tuesday 06 May 2025 00:44:34 +0000 (0:00:00.967) 0:00:01.343 ***********
2025-05-06 00:45:40.454672 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-06 00:45:40.454685 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-06 00:45:40.454700 | orchestrator |
2025-05-06 00:45:40.454714 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-06 00:45:40.454727 | orchestrator | Tuesday 06 May 2025 00:44:36 +0000 (0:00:01.296) 0:00:02.640 ***********
2025-05-06 00:45:40.454760 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.454804 | orchestrator |
2025-05-06 00:45:40.454822 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-06 00:45:40.454838 | orchestrator | Tuesday 06 May 2025 00:44:38 +0000 (0:00:02.469) 0:00:05.110 ***********
2025-05-06 00:45:40.454854 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.454870 | orchestrator |
2025-05-06 00:45:40.454901 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-06 00:45:40.454918 | orchestrator | Tuesday 06 May 2025 00:44:40 +0000 (0:00:01.520) 0:00:06.631 ***********
2025-05-06 00:45:40.454935 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-06 00:45:40.454950 | orchestrator | ok: [testbed-manager]
2025-05-06 00:45:40.454966 | orchestrator |
2025-05-06 00:45:40.454981 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-05-06 00:45:40.454995 | orchestrator | Tuesday 06 May 2025 00:45:04 +0000 (0:00:24.507) 0:00:31.139 ***********
2025-05-06 00:45:40.455009 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.455064 | orchestrator |
2025-05-06 00:45:40.455082 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:45:40.455097 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:45:40.455126 | orchestrator |
2025-05-06 00:45:40.455141 | orchestrator | Tuesday 06 May 2025 00:45:07 +0000 (0:00:02.475) 0:00:33.614 ***********
2025-05-06 00:45:40.455155 | orchestrator | ===============================================================================
2025-05-06 00:45:40.455169 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.51s
2025-05-06 00:45:40.455213 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.48s
2025-05-06 00:45:40.455228 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.47s
2025-05-06 00:45:40.455248 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.52s
2025-05-06 00:45:40.455263 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.30s
2025-05-06 00:45:40.455277 | orchestrator | osism.services.homer : Create traefik external network
------------------ 0.97s
2025-05-06 00:45:40.455291 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.11s
2025-05-06 00:45:40.455304 | orchestrator |
2025-05-06 00:45:40.455318 | orchestrator |
2025-05-06 00:45:40.455332 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-05-06 00:45:40.455346 | orchestrator |
2025-05-06 00:45:40.455360 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-05-06 00:45:40.455373 | orchestrator | Tuesday 06 May 2025 00:44:33 +0000 (0:00:00.168) 0:00:00.168 ***********
2025-05-06 00:45:40.455388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-05-06 00:45:40.455402 | orchestrator |
2025-05-06 00:45:40.455416 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-05-06 00:45:40.455430 | orchestrator | Tuesday 06 May 2025 00:44:33 +0000 (0:00:00.301) 0:00:00.470 ***********
2025-05-06 00:45:40.455444 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-05-06 00:45:40.455458 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-05-06 00:45:40.455472 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-05-06 00:45:40.455485 | orchestrator |
2025-05-06 00:45:40.455499 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-05-06 00:45:40.455513 | orchestrator | Tuesday 06 May 2025 00:44:35 +0000 (0:00:01.225) 0:00:01.695 ***********
2025-05-06 00:45:40.455527 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.455541 | orchestrator |
2025-05-06 00:45:40.455554 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-05-06 00:45:40.455578 | orchestrator | Tuesday 06 May 2025 00:44:37 +0000 (0:00:02.300) 0:00:03.996 ***********
2025-05-06 00:45:40.455592 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-05-06 00:45:40.455606 | orchestrator | ok: [testbed-manager]
2025-05-06 00:45:40.455621 | orchestrator |
2025-05-06 00:45:40.455647 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-05-06 00:45:40.455667 | orchestrator | Tuesday 06 May 2025 00:45:18 +0000 (0:00:40.744) 0:00:44.741 ***********
2025-05-06 00:45:40.455682 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.455696 | orchestrator |
2025-05-06 00:45:40.455710 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-05-06 00:45:40.455724 | orchestrator | Tuesday 06 May 2025 00:45:19 +0000 (0:00:01.721) 0:00:46.463 ***********
2025-05-06 00:45:40.455738 | orchestrator | ok: [testbed-manager]
2025-05-06 00:45:40.455752 | orchestrator |
2025-05-06 00:45:40.455787 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-05-06 00:45:40.455802 | orchestrator | Tuesday 06 May 2025 00:45:21 +0000 (0:00:01.581) 0:00:48.044 ***********
2025-05-06 00:45:40.455816 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.455830 | orchestrator |
2025-05-06 00:45:40.455844 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-05-06 00:45:40.455858 | orchestrator | Tuesday 06 May 2025 00:45:23 +0000 (0:00:02.207) 0:00:50.251 ***********
2025-05-06 00:45:40.455872 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.455886 | orchestrator |
2025-05-06 00:45:40.455900 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-05-06 00:45:40.455914 | orchestrator | Tuesday 06 May 2025 00:45:24 +0000 (0:00:00.803) 0:00:51.055 ***********
2025-05-06 00:45:40.455928 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.455942 | orchestrator |
2025-05-06 00:45:40.455956 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-05-06 00:45:40.455970 | orchestrator | Tuesday 06 May 2025 00:45:25 +0000 (0:00:00.801) 0:00:51.857 ***********
2025-05-06 00:45:40.455983 | orchestrator | ok: [testbed-manager]
2025-05-06 00:45:40.455997 | orchestrator |
2025-05-06 00:45:40.456011 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:45:40.456025 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:45:40.456039 | orchestrator |
2025-05-06 00:45:40.456053 | orchestrator | Tuesday 06 May 2025 00:45:25 +0000 (0:00:00.424) 0:00:52.282 ***********
2025-05-06 00:45:40.456066 | orchestrator | ===============================================================================
2025-05-06 00:45:40.456080 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.74s
2025-05-06 00:45:40.456094 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.30s
2025-05-06 00:45:40.456108 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.21s
2025-05-06 00:45:40.456126 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.72s
2025-05-06 00:45:40.456140 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.58s
2025-05-06 00:45:40.456154 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.23s
2025-05-06 00:45:40.456168 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.80s
2025-05-06 00:45:40.456182 | orchestrator | osism.services.openstackclient
: Wait for an healthy service ------------ 0.80s
2025-05-06 00:45:40.456196 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s
2025-05-06 00:45:40.456210 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.30s
2025-05-06 00:45:40.456224 | orchestrator |
2025-05-06 00:45:40.456238 | orchestrator |
2025-05-06 00:45:40.456251 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 00:45:40.456265 | orchestrator |
2025-05-06 00:45:40.456286 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 00:45:40.456300 | orchestrator | Tuesday 06 May 2025 00:44:33 +0000 (0:00:00.141) 0:00:00.141 ***********
2025-05-06 00:45:40.456314 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-05-06 00:45:40.456328 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-05-06 00:45:40.456342 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-05-06 00:45:40.456356 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-05-06 00:45:40.456370 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-05-06 00:45:40.456384 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-05-06 00:45:40.456400 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-05-06 00:45:40.456423 | orchestrator |
2025-05-06 00:45:40.456455 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-05-06 00:45:40.456486 | orchestrator |
2025-05-06 00:45:40.456510 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-05-06 00:45:40.456534 | orchestrator | Tuesday 06 May 2025 00:44:34 +0000 (0:00:00.788) 0:00:00.929 ***********
2025-05-06 00:45:40.456575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:45:40.456592 | orchestrator |
2025-05-06 00:45:40.456607 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-05-06 00:45:40.456621 | orchestrator | Tuesday 06 May 2025 00:44:36 +0000 (0:00:01.917) 0:00:02.847 ***********
2025-05-06 00:45:40.456635 | orchestrator | ok: [testbed-manager]
2025-05-06 00:45:40.456649 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:45:40.456662 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:45:40.456676 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:45:40.456690 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:45:40.456704 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:45:40.456717 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:45:40.456731 | orchestrator |
2025-05-06 00:45:40.456745 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-05-06 00:45:40.456843 | orchestrator | Tuesday 06 May 2025 00:44:38 +0000 (0:00:02.639) 0:00:05.486 ***********
2025-05-06 00:45:40.456863 | orchestrator | ok: [testbed-manager]
2025-05-06 00:45:40.456878 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:45:40.456892 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:45:40.456905 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:45:40.456919 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:45:40.456933 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:45:40.456953 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:45:40.456967 | orchestrator |
2025-05-06 00:45:40.456981 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-05-06 00:45:40.456996 | orchestrator | Tuesday 06 May 2025 00:44:41 +0000 (0:00:03.068) 0:00:08.554 ***********
2025-05-06 00:45:40.457010 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.457024 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:45:40.457037 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:45:40.457051 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:45:40.457065 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:45:40.457078 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:45:40.457092 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:45:40.457106 | orchestrator |
2025-05-06 00:45:40.457120 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-05-06 00:45:40.457134 | orchestrator | Tuesday 06 May 2025 00:44:43 +0000 (0:00:02.196) 0:00:10.751 ***********
2025-05-06 00:45:40.457147 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.457160 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:45:40.457172 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:45:40.457192 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:45:40.457205 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:45:40.457217 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:45:40.457229 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:45:40.457241 | orchestrator |
2025-05-06 00:45:40.457254 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-05-06 00:45:40.457266 | orchestrator | Tuesday 06 May 2025 00:44:54 +0000 (0:00:10.139) 0:00:20.890 ***********
2025-05-06 00:45:40.457278 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:45:40.457291 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:45:40.457303 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:45:40.457315 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:45:40.457327 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:45:40.457339 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.457351 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:45:40.457363 | orchestrator |
2025-05-06 00:45:40.457376 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-05-06 00:45:40.457388 | orchestrator | Tuesday 06 May 2025 00:45:16 +0000 (0:00:22.366) 0:00:43.257 ***********
2025-05-06 00:45:40.457401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:45:40.457418 | orchestrator |
2025-05-06 00:45:40.457431 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-05-06 00:45:40.457443 | orchestrator | Tuesday 06 May 2025 00:45:18 +0000 (0:00:02.030) 0:00:45.287 ***********
2025-05-06 00:45:40.457456 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-05-06 00:45:40.457468 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-05-06 00:45:40.457481 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-05-06 00:45:40.457493 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-05-06 00:45:40.457505 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-05-06 00:45:40.457517 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-05-06 00:45:40.457529 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-05-06 00:45:40.457541 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-05-06 00:45:40.457553 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-05-06 00:45:40.457565 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-05-06 00:45:40.457578 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-05-06 00:45:40.457590 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-05-06 00:45:40.457602 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-05-06 00:45:40.457614 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-05-06 00:45:40.457627 | orchestrator |
2025-05-06 00:45:40.457639 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-05-06 00:45:40.457652 | orchestrator | Tuesday 06 May 2025 00:45:24 +0000 (0:00:05.982) 0:00:51.270 ***********
2025-05-06 00:45:40.457664 | orchestrator | ok: [testbed-manager]
2025-05-06 00:45:40.457677 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:45:40.457689 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:45:40.457701 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:45:40.457713 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:45:40.457726 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:45:40.457738 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:45:40.457750 | orchestrator |
2025-05-06 00:45:40.457784 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-05-06 00:45:40.457800 | orchestrator | Tuesday 06 May 2025 00:45:26 +0000 (0:00:01.567) 0:00:52.837 ***********
2025-05-06 00:45:40.457812 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.457824 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:45:40.457837 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:45:40.457856 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:45:40.457868 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:45:40.457880 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:45:40.457892 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:45:40.457905 | orchestrator |
2025-05-06 00:45:40.457917 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-05-06 00:45:40.457934 | orchestrator | Tuesday 06 May 2025 00:45:27 +0000 (0:00:01.442) 0:00:54.279 ***********
2025-05-06 00:45:40.457947 | orchestrator | ok: [testbed-manager]
2025-05-06 00:45:40.457959 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:45:40.457972 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:45:40.457984 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:45:40.458002 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:45:40.458055 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:45:40.458071 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:45:40.458084 | orchestrator |
2025-05-06 00:45:40.458097 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-05-06 00:45:40.458109 | orchestrator | Tuesday 06 May 2025 00:45:28 +0000 (0:00:01.393) 0:00:55.673 ***********
2025-05-06 00:45:40.458121 | orchestrator | ok: [testbed-manager]
2025-05-06 00:45:40.458133 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:45:40.458145 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:45:40.458157 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:45:40.458169 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:45:40.458181 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:45:40.458193 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:45:40.458205 | orchestrator |
2025-05-06 00:45:40.458218 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-05-06 00:45:40.458230 | orchestrator | Tuesday 06 May 2025 00:45:31 +0000 (0:00:02.168) 0:00:57.841 ***********
2025-05-06 00:45:40.458242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-05-06 00:45:40.458256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:45:40.458269 | orchestrator |
2025-05-06 00:45:40.458282 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-05-06 00:45:40.458294 | orchestrator | Tuesday 06 May 2025 00:45:32 +0000 (0:00:01.711) 0:00:59.553 ***********
2025-05-06 00:45:40.458306 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.458319 | orchestrator |
2025-05-06 00:45:40.458332 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-06 00:45:40.458344 | orchestrator | Tuesday 06 May 2025 00:45:34 +0000 (0:00:01.668) 0:01:01.221 ***********
2025-05-06 00:45:40.458357 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:45:40.458376 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:45:40.458406 | orchestrator | changed: [testbed-manager]
2025-05-06 00:45:40.458420 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:45:40.458433 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:45:40.458445 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:45:40.458457 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:45:40.458470 | orchestrator |
2025-05-06 00:45:40.458482 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:45:40.458495 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:45:40.458508 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:45:40.458521 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:45:40.458580 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:45:40.458602 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:45:40.458614 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:45:40.458627 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:45:40.458639 | orchestrator |
2025-05-06 00:45:40.458652 | orchestrator | Tuesday 06 May 2025 00:45:37 +0000 (0:00:02.954) 0:01:04.176 ***********
2025-05-06 00:45:40.458664 | orchestrator | ===============================================================================
2025-05-06 00:45:40.458677 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 22.37s
2025-05-06 00:45:40.458689 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.14s
2025-05-06 00:45:40.458701 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.98s
2025-05-06 00:45:40.458714 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.07s
2025-05-06 00:45:40.458726 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.95s
2025-05-06 00:45:40.458738 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.64s
2025-05-06 00:45:40.458751 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.20s
2025-05-06 00:45:40.458781 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.17s
2025-05-06 00:45:40.458795 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.03s
2025-05-06 00:45:40.458807 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.92s
2025-05-06 00:45:40.458820 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.71s
2025-05-06 00:45:40.458832 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.67s
2025-05-06 00:45:40.458844 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.57s
2025-05-06 00:45:40.458857 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.44s
2025-05-06 00:45:40.458877 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.39s
2025-05-06 00:45:43.488369 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2025-05-06 00:45:43.488475 | orchestrator | 2025-05-06 00:45:40 | INFO  | Task 9634b2b7-5439-4ca1-baed-73b908a52d64 is in state SUCCESS
2025-05-06 00:45:43.488497 | orchestrator | 2025-05-06 00:45:40 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:45:43.488512 | orchestrator | 2025-05-06 00:45:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:45:43.488526 | orchestrator | 2025-05-06 00:45:40 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:45:43.488540 | orchestrator | 2025-05-06 00:45:40 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED
2025-05-06 00:45:43.488554 | orchestrator | 2025-05-06 00:45:40 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:45:43.488582 | orchestrator | 2025-05-06 00:45:43 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:45:43.489596 | orchestrator | 2025-05-06 00:45:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:45:43.490083 | orchestrator | 2025-05-06 00:45:43 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:45:43.490608 | orchestrator | 2025-05-06 00:45:43 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED
2025-05-06 00:45:46.531547 | orchestrator | 2025-05-06 00:45:43 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:45:46.531721 | orchestrator | 2025-05-06 00:45:46 | INFO  | Task
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:45:46.531933 | orchestrator | 2025-05-06 00:45:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:45:46.533128 | orchestrator | 2025-05-06 00:45:46 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:45:46.533878 | orchestrator | 2025-05-06 00:45:46 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED
2025-05-06 00:45:46.534161 | orchestrator | 2025-05-06 00:45:46 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:45:49.581129 | orchestrator | 2025-05-06 00:45:49 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:45:49.581338 | orchestrator | 2025-05-06 00:45:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:45:49.581364 | orchestrator | 2025-05-06 00:45:49 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:45:49.581385 | orchestrator | 2025-05-06 00:45:49 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state STARTED
2025-05-06 00:45:52.630677 | orchestrator | 2025-05-06 00:45:49 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:45:52.630909 | orchestrator | 2025-05-06 00:45:52 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:45:55.656697 | orchestrator | 2025-05-06 00:45:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:45:55.656837 | orchestrator | 2025-05-06 00:45:52 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:45:55.656858 | orchestrator | 2025-05-06 00:45:52 | INFO  | Task 20f3084a-09c8-4a62-800e-4aa71d56fa98 is in state SUCCESS
2025-05-06 00:45:55.656875 | orchestrator | 2025-05-06 00:45:52 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:45:55.656905 | orchestrator | 2025-05-06 00:45:55 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:45:55.657568 | orchestrator | 2025-05-06 00:45:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:45:55.657604 | orchestrator | 2025-05-06 00:45:55 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:45:55.658430 | orchestrator | 2025-05-06 00:45:55 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:45:58.694162 | orchestrator | 2025-05-06 00:45:58 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:45:58.695006 | orchestrator | 2025-05-06 00:45:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:45:58.696947 | orchestrator | 2025-05-06 00:45:58 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:45:58.697039 | orchestrator | 2025-05-06 00:45:58 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:46:01.758663 | orchestrator | 2025-05-06 00:46:01 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:46:01.760548 | orchestrator | 2025-05-06 00:46:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:46:01.765056 | orchestrator | 2025-05-06 00:46:01 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:46:04.819247 | orchestrator | 2025-05-06 00:46:01 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:46:04.819389 | orchestrator | 2025-05-06 00:46:04 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:46:04.821692 | orchestrator | 2025-05-06 00:46:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:46:04.823419 | orchestrator | 2025-05-06 00:46:04 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:46:04.823493 | orchestrator | 2025-05-06 00:46:04 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:46:07.866988 | orchestrator | 2025-05-06 00:46:07 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:46:07.867380 | orchestrator | 2025-05-06 00:46:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:46:07.870673 | orchestrator | 2025-05-06 00:46:07 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:46:07.871148 | orchestrator | 2025-05-06 00:46:07 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:46:10.921856 | orchestrator | 2025-05-06 00:46:10 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:46:10.922120 | orchestrator | 2025-05-06 00:46:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:46:10.922611 | orchestrator | 2025-05-06 00:46:10 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:46:13.981040 | orchestrator | 2025-05-06 00:46:10 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:46:13.981169 | orchestrator | 2025-05-06 00:46:13 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:46:13.983442 | orchestrator | 2025-05-06 00:46:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:46:13.985835 | orchestrator | 2025-05-06 00:46:13 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:46:17.048560 | orchestrator | 2025-05-06 00:46:13 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:46:17.048707 | orchestrator | 2025-05-06 00:46:17 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:46:17.048887 | orchestrator | 2025-05-06 00:46:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:46:17.049944 | orchestrator | 2025-05-06 00:46:17 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:46:20.111169 | orchestrator | 2025-05-06 00:46:17 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:46:20.111307 | orchestrator | 2025-05-06 00:46:20 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:46:20.114262 | orchestrator | 2025-05-06 00:46:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:46:20.116320 | orchestrator | 2025-05-06 00:46:20 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:46:23.168182 | orchestrator | 2025-05-06 00:46:20 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:46:23.168339 | orchestrator | 2025-05-06 00:46:23 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:46:23.168969 | orchestrator | 2025-05-06 00:46:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:46:23.171687 | orchestrator | 2025-05-06 00:46:23 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:46:23.172142 | orchestrator | 2025-05-06 00:46:23 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:46:26.213689 | orchestrator | 2025-05-06 00:46:26 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:46:26.219167 | orchestrator | 2025-05-06 00:46:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:46:29.267856 | orchestrator | 2025-05-06 00:46:26 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED
2025-05-06 00:46:29.267979 | orchestrator | 2025-05-06 00:46:26 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:46:29.268017 | orchestrator | 2025-05-06 00:46:29 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:46:29.269632 | orchestrator | 2025-05-06 00:46:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:46:29.271492 | orchestrator | 2025-05-06 00:46:29 |
INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:46:29.271722 | orchestrator | 2025-05-06 00:46:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:46:32.322453 | orchestrator | 2025-05-06 00:46:32 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:46:32.322981 | orchestrator | 2025-05-06 00:46:32 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:46:32.324285 | orchestrator | 2025-05-06 00:46:32 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:46:35.382864 | orchestrator | 2025-05-06 00:46:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:46:35.383015 | orchestrator | 2025-05-06 00:46:35 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:46:35.383312 | orchestrator | 2025-05-06 00:46:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:46:35.385582 | orchestrator | 2025-05-06 00:46:35 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:46:38.427669 | orchestrator | 2025-05-06 00:46:35 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:46:38.427909 | orchestrator | 2025-05-06 00:46:38 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:46:38.428038 | orchestrator | 2025-05-06 00:46:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:46:38.429032 | orchestrator | 2025-05-06 00:46:38 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:46:41.479602 | orchestrator | 2025-05-06 00:46:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:46:41.479820 | orchestrator | 2025-05-06 00:46:41 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:46:41.483043 | orchestrator | 2025-05-06 00:46:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in 
state STARTED 2025-05-06 00:46:41.484265 | orchestrator | 2025-05-06 00:46:41 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:46:41.484429 | orchestrator | 2025-05-06 00:46:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:46:44.530977 | orchestrator | 2025-05-06 00:46:44 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:46:44.532935 | orchestrator | 2025-05-06 00:46:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:46:44.535115 | orchestrator | 2025-05-06 00:46:44 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:46:47.583143 | orchestrator | 2025-05-06 00:46:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:46:47.583302 | orchestrator | 2025-05-06 00:46:47 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:46:47.583434 | orchestrator | 2025-05-06 00:46:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:46:47.584496 | orchestrator | 2025-05-06 00:46:47 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:46:50.628714 | orchestrator | 2025-05-06 00:46:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:46:50.628882 | orchestrator | 2025-05-06 00:46:50 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:46:50.628985 | orchestrator | 2025-05-06 00:46:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:46:50.631385 | orchestrator | 2025-05-06 00:46:50 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:46:53.673786 | orchestrator | 2025-05-06 00:46:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:46:53.673940 | orchestrator | 2025-05-06 00:46:53 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:46:53.678421 | orchestrator 
| 2025-05-06 00:46:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:46:53.680621 | orchestrator | 2025-05-06 00:46:53 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state STARTED 2025-05-06 00:46:56.732397 | orchestrator | 2025-05-06 00:46:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:46:56.732536 | orchestrator | 2025-05-06 00:46:56 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:46:56.733918 | orchestrator | 2025-05-06 00:46:56 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:46:56.734816 | orchestrator | 2025-05-06 00:46:56 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:46:56.735903 | orchestrator | 2025-05-06 00:46:56 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:46:56.736937 | orchestrator | 2025-05-06 00:46:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:46:56.741279 | orchestrator | 2025-05-06 00:46:56 | INFO  | Task 5fec06b0-c0aa-40e5-9e81-830a40505d2e is in state SUCCESS 2025-05-06 00:46:56.743885 | orchestrator | 2025-05-06 00:46:56.744002 | orchestrator | 2025-05-06 00:46:56.744023 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-05-06 00:46:56.744049 | orchestrator | 2025-05-06 00:46:56.744073 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-06 00:46:56.744089 | orchestrator | Tuesday 06 May 2025 00:44:47 +0000 (0:00:00.265) 0:00:00.265 *********** 2025-05-06 00:46:56.744103 | orchestrator | ok: [testbed-manager] 2025-05-06 00:46:56.744118 | orchestrator | 2025-05-06 00:46:56.744132 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-06 00:46:56.744147 | orchestrator | Tuesday 06 May 2025 00:44:48 +0000 (0:00:00.956) 0:00:01.221 
*********** 2025-05-06 00:46:56.744161 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-05-06 00:46:56.744186 | orchestrator | 2025-05-06 00:46:56.744212 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-06 00:46:56.744234 | orchestrator | Tuesday 06 May 2025 00:44:49 +0000 (0:00:00.593) 0:00:01.815 *********** 2025-05-06 00:46:56.744249 | orchestrator | changed: [testbed-manager] 2025-05-06 00:46:56.744263 | orchestrator | 2025-05-06 00:46:56.744277 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-06 00:46:56.744291 | orchestrator | Tuesday 06 May 2025 00:44:50 +0000 (0:00:01.260) 0:00:03.075 *********** 2025-05-06 00:46:56.744305 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-05-06 00:46:56.744319 | orchestrator | ok: [testbed-manager] 2025-05-06 00:46:56.744333 | orchestrator | 2025-05-06 00:46:56.744347 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-06 00:46:56.744361 | orchestrator | Tuesday 06 May 2025 00:45:46 +0000 (0:00:55.630) 0:00:58.705 *********** 2025-05-06 00:46:56.744399 | orchestrator | changed: [testbed-manager] 2025-05-06 00:46:56.744415 | orchestrator | 2025-05-06 00:46:56.744430 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:46:56.744447 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:46:56.744464 | orchestrator | 2025-05-06 00:46:56.744479 | orchestrator | Tuesday 06 May 2025 00:45:49 +0000 (0:00:03.542) 0:01:02.247 *********** 2025-05-06 00:46:56.744496 | orchestrator | =============================================================================== 2025-05-06 00:46:56.744512 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service 
------------------ 55.63s 2025-05-06 00:46:56.744528 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.54s 2025-05-06 00:46:56.744559 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.26s 2025-05-06 00:46:56.744576 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.96s 2025-05-06 00:46:56.744591 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.59s 2025-05-06 00:46:56.744608 | orchestrator | 2025-05-06 00:46:56.744623 | orchestrator | 2025-05-06 00:46:56.744640 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-05-06 00:46:56.744656 | orchestrator | 2025-05-06 00:46:56.744701 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-06 00:46:56.744717 | orchestrator | Tuesday 06 May 2025 00:44:29 +0000 (0:00:00.345) 0:00:00.345 *********** 2025-05-06 00:46:56.744732 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:46:56.744747 | orchestrator | 2025-05-06 00:46:56.744761 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-06 00:46:56.744774 | orchestrator | Tuesday 06 May 2025 00:44:30 +0000 (0:00:01.404) 0:00:01.749 *********** 2025-05-06 00:46:56.744788 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-06 00:46:56.744802 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-06 00:46:56.744816 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-06 00:46:56.744830 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-06 
00:46:56.744843 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-06 00:46:56.744857 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-06 00:46:56.744871 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-06 00:46:56.744886 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-06 00:46:56.744900 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-06 00:46:56.744913 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-06 00:46:56.744935 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-06 00:46:56.744965 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-06 00:46:56.745008 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-06 00:46:56.745031 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-06 00:46:56.745046 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-06 00:46:56.745059 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-06 00:46:56.745078 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-06 00:46:56.745138 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-06 00:46:56.745154 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-06 00:46:56.745168 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-06 
00:46:56.745183 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-06 00:46:56.745196 | orchestrator | 2025-05-06 00:46:56.745210 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-06 00:46:56.745224 | orchestrator | Tuesday 06 May 2025 00:44:34 +0000 (0:00:03.551) 0:00:05.301 *********** 2025-05-06 00:46:56.745238 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:46:56.745259 | orchestrator | 2025-05-06 00:46:56.745273 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-06 00:46:56.745287 | orchestrator | Tuesday 06 May 2025 00:44:35 +0000 (0:00:01.716) 0:00:07.018 *********** 2025-05-06 00:46:56.745305 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.745323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.745338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.745376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.745392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.745414 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.745482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.745497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745547 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745594 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745652 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745726 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.745743 | orchestrator | 2025-05-06 00:46:56.745757 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-06 00:46:56.745780 | orchestrator | Tuesday 06 May 2025 00:44:41 +0000 (0:00:05.144) 0:00:12.162 *********** 2025-05-06 00:46:56.745816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.745833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.745854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.745870 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.745884 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.745899 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.745921 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:46:56.745937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.745984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-06 00:46:56.746000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.746092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.746113 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:46:56.746127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.746142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.746156 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:46:56.746171 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:46:56.746185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.746207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.746222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.746236 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:46:56.746259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.746275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.746289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.746304 | orchestrator | 
skipping: [testbed-node-4] 2025-05-06 00:46:56.746318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.746333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.746369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.746384 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:46:56.746398 | orchestrator | 2025-05-06 00:46:56.746412 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-06 00:46:56.746426 | 
orchestrator | Tuesday 06 May 2025 00:44:42 +0000 (0:00:01.758) 0:00:13.921 *********** 2025-05-06 00:46:56.746452 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.746474 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.746924 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.747035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747103 | orchestrator | skipping: [testbed-manager] 2025-05-06 00:46:56.747120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.747136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747182 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:46:56.747197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.747212 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747240 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:46:56.747263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.747284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.747340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.747370 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:46:56.747384 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:46:56.747398 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:46:56.747412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-06 00:46:56.747433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:46:56.747448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:46:56.747462 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:46:56.747477 | orchestrator |
2025-05-06 00:46:56.747494 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-05-06 00:46:56.747512 | orchestrator | Tuesday 06 May 2025 00:44:45 +0000 (0:00:00.900) 0:00:16.421 ***********
2025-05-06 00:46:56.747529 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:46:56.747545 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:46:56.747562 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:46:56.747578 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:46:56.747594 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:46:56.747610 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:46:56.747626 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:46:56.747643 | orchestrator |
2025-05-06 00:46:56.747660 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-05-06 00:46:56.747701 | orchestrator | Tuesday 06 May 2025 00:44:46 +0000 (0:00:00.900) 0:00:17.322 ***********
2025-05-06 00:46:56.747717 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:46:56.747733 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:46:56.747749 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:46:56.747765 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:46:56.747781 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:46:56.747796 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:46:56.747811 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:46:56.747827 | orchestrator |
2025-05-06 00:46:56.747843 | orchestrator | TASK [common : Ensure fluentd image is present for label check] ****************
2025-05-06 00:46:56.747859 | orchestrator | Tuesday 06 May 2025 00:44:47 +0000 (0:00:00.979) 0:00:18.301 ***********
2025-05-06 00:46:56.747875 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:46:56.747892 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:46:56.747908 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:46:56.747924 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:46:56.747940 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:46:56.747955 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:46:56.747968 | orchestrator | changed: [testbed-manager]
2025-05-06 00:46:56.747982 | orchestrator |
2025-05-06 00:46:56.747996 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ******************************
2025-05-06 00:46:56.748010 | orchestrator | Tuesday 06 May 2025 00:45:24 +0000 (0:00:37.702) 0:00:56.003 ***********
2025-05-06 00:46:56.748024 | orchestrator | ok: [testbed-manager]
2025-05-06 00:46:56.748046 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:46:56.748060 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:46:56.748074 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:46:56.748088 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:46:56.748109 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:46:56.748123 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:46:56.748144 | orchestrator |
2025-05-06 00:46:56.748158 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-06 00:46:56.748173 | orchestrator | Tuesday 06 May 2025 00:45:27 +0000 (0:00:02.225) 0:00:58.228 ***********
2025-05-06 00:46:56.748188 | orchestrator | ok: [testbed-manager]
2025-05-06 00:46:56.748202 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:46:56.748216 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:46:56.748229 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:46:56.748243 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:46:56.748257 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:46:56.748271 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:46:56.748284 | orchestrator |
2025-05-06 00:46:56.748298 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ******************************
2025-05-06 00:46:56.748312 | orchestrator | Tuesday 06 May 2025 00:45:28 +0000 (0:00:01.067) 0:00:59.296 ***********
2025-05-06 00:46:56.748326 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:46:56.748340 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:46:56.748354 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:46:56.748368 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:46:56.748381 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:46:56.748395 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:46:56.748409 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:46:56.748423 | orchestrator |
2025-05-06 00:46:56.748437 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-06 00:46:56.748451 | orchestrator | Tuesday 06 May 2025 00:45:29 +0000 (0:00:01.163) 0:01:00.459 ***********
2025-05-06 00:46:56.748465 | orchestrator | skipping: [testbed-manager]
2025-05-06 00:46:56.748478 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:46:56.748492 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:46:56.748506 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:46:56.748520 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:46:56.748533 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:46:56.748547 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:46:56.748561 | orchestrator | 2025-05-06 00:46:56.748575 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-06 00:46:56.748589 | orchestrator | Tuesday 06 May 2025 00:45:30 +0000 (0:00:00.860) 0:01:01.319 *********** 2025-05-06 00:46:56.748603 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.748618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.748638 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.748703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.748719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.748733 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.748851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.748865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748928 
| orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.748992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.749006 | orchestrator | 2025-05-06 00:46:56.749021 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-06 00:46:56.749035 | orchestrator | Tuesday 06 May 2025 00:45:35 +0000 (0:00:05.169) 0:01:06.489 *********** 2025-05-06 00:46:56.749049 | orchestrator | [WARNING]: Skipped 2025-05-06 00:46:56.749064 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-06 00:46:56.749078 | orchestrator | to this access issue: 2025-05-06 00:46:56.749092 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-06 00:46:56.749106 | orchestrator | directory 2025-05-06 00:46:56.749120 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-06 00:46:56.749134 | orchestrator | 2025-05-06 00:46:56.749148 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-06 00:46:56.749162 | orchestrator | Tuesday 06 May 2025 00:45:36 +0000 (0:00:00.845) 0:01:07.334 *********** 2025-05-06 00:46:56.749177 | orchestrator | [WARNING]: Skipped 2025-05-06 00:46:56.749196 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-06 00:46:56.749211 | orchestrator | to this access issue: 2025-05-06 00:46:56.749225 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-06 00:46:56.749239 | orchestrator | directory 2025-05-06 00:46:56.749253 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-06 00:46:56.749267 | orchestrator | 2025-05-06 00:46:56.749281 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 
2025-05-06 00:46:56.749296 | orchestrator | Tuesday 06 May 2025 00:45:36 +0000 (0:00:00.480) 0:01:07.815 *********** 2025-05-06 00:46:56.749310 | orchestrator | [WARNING]: Skipped 2025-05-06 00:46:56.749324 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-06 00:46:56.749338 | orchestrator | to this access issue: 2025-05-06 00:46:56.749357 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-06 00:46:56.749384 | orchestrator | directory 2025-05-06 00:46:56.749410 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-06 00:46:56.749444 | orchestrator | 2025-05-06 00:46:56.749470 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-06 00:46:56.749497 | orchestrator | Tuesday 06 May 2025 00:45:37 +0000 (0:00:00.490) 0:01:08.305 *********** 2025-05-06 00:46:56.749523 | orchestrator | [WARNING]: Skipped 2025-05-06 00:46:56.749550 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-06 00:46:56.749577 | orchestrator | to this access issue: 2025-05-06 00:46:56.749602 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-06 00:46:56.749616 | orchestrator | directory 2025-05-06 00:46:56.749630 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-06 00:46:56.749644 | orchestrator | 2025-05-06 00:46:56.749658 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-06 00:46:56.749732 | orchestrator | Tuesday 06 May 2025 00:45:37 +0000 (0:00:00.679) 0:01:08.985 *********** 2025-05-06 00:46:56.749747 | orchestrator | changed: [testbed-manager] 2025-05-06 00:46:56.749761 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:46:56.749775 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:46:56.749789 | orchestrator | changed: [testbed-node-1] 2025-05-06 
00:46:56.749803 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:46:56.749817 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:46:56.749830 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:46:56.749844 | orchestrator | 2025-05-06 00:46:56.749858 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-06 00:46:56.749872 | orchestrator | Tuesday 06 May 2025 00:45:41 +0000 (0:00:04.042) 0:01:13.027 *********** 2025-05-06 00:46:56.749886 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-06 00:46:56.749901 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-06 00:46:56.749915 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-06 00:46:56.749929 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-06 00:46:56.749943 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-06 00:46:56.749957 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-06 00:46:56.749971 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-06 00:46:56.749985 | orchestrator | 2025-05-06 00:46:56.749999 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-06 00:46:56.750051 | orchestrator | Tuesday 06 May 2025 00:45:44 +0000 (0:00:02.352) 0:01:15.380 *********** 2025-05-06 00:46:56.750068 | orchestrator | changed: [testbed-manager] 2025-05-06 00:46:56.750085 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:46:56.750099 | orchestrator | changed: [testbed-node-1] 2025-05-06 
00:46:56.750113 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:46:56.750127 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:46:56.750152 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:46:56.750169 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:46:56.750194 | orchestrator | 2025-05-06 00:46:56.750218 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-06 00:46:56.750246 | orchestrator | Tuesday 06 May 2025 00:45:46 +0000 (0:00:02.587) 0:01:17.967 *********** 2025-05-06 00:46:56.750269 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.750307 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.750322 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.750339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.750364 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.750397 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 
00:46:56.750422 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.750459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.750496 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.750522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.750553 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.750577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.750602 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.750626 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.750660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.750726 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.750752 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.750776 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.750800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:46:56.750829 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-06 00:46:56.750855 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.750879 | orchestrator | 2025-05-06 00:46:56.750905 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-06 00:46:56.750928 | orchestrator | Tuesday 06 May 2025 00:45:49 +0000 (0:00:02.289) 0:01:20.257 *********** 2025-05-06 00:46:56.750949 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-06 00:46:56.750972 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-06 00:46:56.750993 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-06 00:46:56.751016 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-06 00:46:56.751036 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-06 00:46:56.751057 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-06 00:46:56.751095 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-06 00:46:56.751119 | orchestrator | 2025-05-06 00:46:56.751142 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-06 00:46:56.751183 | orchestrator | Tuesday 06 May 2025 00:45:52 +0000 (0:00:03.010) 0:01:23.267 *********** 2025-05-06 
00:46:56.751210 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-06 00:46:56.751235 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-06 00:46:56.751259 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-06 00:46:56.751283 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-06 00:46:56.751306 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-06 00:46:56.751322 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-06 00:46:56.751336 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-06 00:46:56.751350 | orchestrator | 2025-05-06 00:46:56.751364 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-06 00:46:56.751378 | orchestrator | Tuesday 06 May 2025 00:45:54 +0000 (0:00:02.509) 0:01:25.777 *********** 2025-05-06 00:46:56.751399 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.751423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.751449 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.751498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.751556 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.751584 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751700 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.751717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-06 00:46:56.751749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751798 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751885 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:46:56.751929 | orchestrator | 2025-05-06 00:46:56.751953 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-06 00:46:56.751979 | orchestrator | Tuesday 06 May 2025 00:45:57 +0000 (0:00:03.289) 0:01:29.066 *********** 2025-05-06 00:46:56.751997 | orchestrator | changed: [testbed-manager] 2025-05-06 00:46:56.752019 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:46:56.752034 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:46:56.752101 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:46:56.752116 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:46:56.752130 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:46:56.752149 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:46:56.752163 | orchestrator | 2025-05-06 00:46:56.752177 | orchestrator | TASK [common : Link 
kolla_logs volume to /var/log/kolla] *********************** 2025-05-06 00:46:56.752191 | orchestrator | Tuesday 06 May 2025 00:45:59 +0000 (0:00:01.455) 0:01:30.521 *********** 2025-05-06 00:46:56.752205 | orchestrator | changed: [testbed-manager] 2025-05-06 00:46:56.752219 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:46:56.752233 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:46:56.752246 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:46:56.752260 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:46:56.752274 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:46:56.752287 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:46:56.752301 | orchestrator | 2025-05-06 00:46:56.752315 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-06 00:46:56.752329 | orchestrator | Tuesday 06 May 2025 00:46:00 +0000 (0:00:01.381) 0:01:31.903 *********** 2025-05-06 00:46:56.752343 | orchestrator | 2025-05-06 00:46:56.752357 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-06 00:46:56.752371 | orchestrator | Tuesday 06 May 2025 00:46:00 +0000 (0:00:00.058) 0:01:31.961 *********** 2025-05-06 00:46:56.752384 | orchestrator | 2025-05-06 00:46:56.752398 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-06 00:46:56.752412 | orchestrator | Tuesday 06 May 2025 00:46:00 +0000 (0:00:00.054) 0:01:32.016 *********** 2025-05-06 00:46:56.752426 | orchestrator | 2025-05-06 00:46:56.752439 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-06 00:46:56.752453 | orchestrator | Tuesday 06 May 2025 00:46:00 +0000 (0:00:00.051) 0:01:32.068 *********** 2025-05-06 00:46:56.752467 | orchestrator | 2025-05-06 00:46:56.752481 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-06 
00:46:56.752494 | orchestrator | Tuesday 06 May 2025 00:46:01 +0000 (0:00:00.233) 0:01:32.302 *********** 2025-05-06 00:46:56.752508 | orchestrator | 2025-05-06 00:46:56.752522 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-06 00:46:56.752536 | orchestrator | Tuesday 06 May 2025 00:46:01 +0000 (0:00:00.054) 0:01:32.356 *********** 2025-05-06 00:46:56.752549 | orchestrator | 2025-05-06 00:46:56.752563 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-06 00:46:56.752577 | orchestrator | Tuesday 06 May 2025 00:46:01 +0000 (0:00:00.050) 0:01:32.407 *********** 2025-05-06 00:46:56.752599 | orchestrator | 2025-05-06 00:46:56.752613 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-06 00:46:56.752627 | orchestrator | Tuesday 06 May 2025 00:46:01 +0000 (0:00:00.068) 0:01:32.475 *********** 2025-05-06 00:46:56.752640 | orchestrator | changed: [testbed-manager] 2025-05-06 00:46:56.752731 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:46:56.752746 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:46:56.752759 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:46:56.752773 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:46:56.752787 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:46:56.752801 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:46:56.752814 | orchestrator | 2025-05-06 00:46:56.752829 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-06 00:46:56.752843 | orchestrator | Tuesday 06 May 2025 00:46:09 +0000 (0:00:08.639) 0:01:41.115 *********** 2025-05-06 00:46:56.752856 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:46:56.752870 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:46:56.752884 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:46:56.752898 | orchestrator | 
changed: [testbed-node-1] 2025-05-06 00:46:56.752911 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:46:56.752925 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:46:56.752938 | orchestrator | changed: [testbed-manager] 2025-05-06 00:46:56.752952 | orchestrator | 2025-05-06 00:46:56.752966 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-06 00:46:56.752980 | orchestrator | Tuesday 06 May 2025 00:46:39 +0000 (0:00:29.366) 0:02:10.481 *********** 2025-05-06 00:46:56.752993 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:46:56.753008 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:46:56.753022 | orchestrator | ok: [testbed-manager] 2025-05-06 00:46:56.753035 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:46:56.753049 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:46:56.753063 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:46:56.753077 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:46:56.753091 | orchestrator | 2025-05-06 00:46:56.753104 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-06 00:46:56.753118 | orchestrator | Tuesday 06 May 2025 00:46:41 +0000 (0:00:02.583) 0:02:13.064 *********** 2025-05-06 00:46:56.753132 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:46:56.753146 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:46:56.753160 | orchestrator | changed: [testbed-manager] 2025-05-06 00:46:56.753173 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:46:56.753187 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:46:56.753201 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:46:56.753215 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:46:56.753233 | orchestrator | 2025-05-06 00:46:56.753262 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:46:56.753300 | orchestrator | testbed-manager : ok=25  
changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 00:46:56.753322 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 00:46:56.753344 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 00:46:56.753377 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 00:46:59.798634 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 00:46:59.798755 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 00:46:59.798789 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 00:46:59.798801 | orchestrator | 2025-05-06 00:46:59.798811 | orchestrator | 2025-05-06 00:46:59.798821 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 00:46:59.798831 | orchestrator | Tuesday 06 May 2025 00:46:54 +0000 (0:00:12.600) 0:02:25.665 *********** 2025-05-06 00:46:59.798840 | orchestrator | =============================================================================== 2025-05-06 00:46:59.798850 | orchestrator | common : Ensure fluentd image is present for label check --------------- 37.70s 2025-05-06 00:46:59.798859 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 29.37s 2025-05-06 00:46:59.798879 | orchestrator | common : Restart cron container ---------------------------------------- 12.60s 2025-05-06 00:46:59.798888 | orchestrator | common : Restart fluentd container -------------------------------------- 8.64s 2025-05-06 00:46:59.798898 | orchestrator | common : Copying over config.json files for services -------------------- 5.17s 2025-05-06 00:46:59.798907 | orchestrator | 
service-cert-copy : common | Copying over extra CA certificates --------- 5.14s 2025-05-06 00:46:59.798917 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.04s 2025-05-06 00:46:59.798926 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.55s 2025-05-06 00:46:59.798935 | orchestrator | common : Check common containers ---------------------------------------- 3.29s 2025-05-06 00:46:59.798945 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.01s 2025-05-06 00:46:59.798954 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.59s 2025-05-06 00:46:59.798963 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.58s 2025-05-06 00:46:59.798972 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.51s 2025-05-06 00:46:59.798982 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.50s 2025-05-06 00:46:59.798991 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.35s 2025-05-06 00:46:59.799000 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.29s 2025-05-06 00:46:59.799010 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.23s 2025-05-06 00:46:59.799019 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.76s 2025-05-06 00:46:59.799028 | orchestrator | common : include_tasks -------------------------------------------------- 1.72s 2025-05-06 00:46:59.799038 | orchestrator | common : Creating log volume -------------------------------------------- 1.46s 2025-05-06 00:46:59.799047 | orchestrator | 2025-05-06 00:46:56 | INFO  | Task 04010858-e87d-4e68-ac1d-758953ca8ac4 is in state STARTED 2025-05-06 00:46:59.799057 | orchestrator 
| 2025-05-06 00:46:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:46:59.799077 | orchestrator | 2025-05-06 00:46:59 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:46:59.799214 | orchestrator | 2025-05-06 00:46:59 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:46:59.800296 | orchestrator | 2025-05-06 00:46:59 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:46:59.801182 | orchestrator | 2025-05-06 00:46:59 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:46:59.803478 | orchestrator | 2025-05-06 00:46:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:46:59.804123 | orchestrator | 2025-05-06 00:46:59 | INFO  | Task 04010858-e87d-4e68-ac1d-758953ca8ac4 is in state STARTED 2025-05-06 00:47:02.839009 | orchestrator | 2025-05-06 00:46:59 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:02.839892 | orchestrator | 2025-05-06 00:47:02 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:02.840270 | orchestrator | 2025-05-06 00:47:02 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:02.840382 | orchestrator | 2025-05-06 00:47:02 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:47:02.842249 | orchestrator | 2025-05-06 00:47:02 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:47:02.843358 | orchestrator | 2025-05-06 00:47:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:47:02.843410 | orchestrator | 2025-05-06 00:47:02 | INFO  | Task 04010858-e87d-4e68-ac1d-758953ca8ac4 is in state STARTED 2025-05-06 00:47:05.893179 | orchestrator | 2025-05-06 00:47:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:05.893315 | orchestrator | 2025-05-06 00:47:05 | INFO  | 
Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:05.897887 | orchestrator | 2025-05-06 00:47:05 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:05.898913 | orchestrator | 2025-05-06 00:47:05 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:47:05.900536 | orchestrator | 2025-05-06 00:47:05 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:47:05.901779 | orchestrator | 2025-05-06 00:47:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:47:05.902952 | orchestrator | 2025-05-06 00:47:05 | INFO  | Task 04010858-e87d-4e68-ac1d-758953ca8ac4 is in state STARTED 2025-05-06 00:47:05.903864 | orchestrator | 2025-05-06 00:47:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:08.955140 | orchestrator | 2025-05-06 00:47:08 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:08.956107 | orchestrator | 2025-05-06 00:47:08 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:08.957183 | orchestrator | 2025-05-06 00:47:08 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:47:08.958204 | orchestrator | 2025-05-06 00:47:08 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:47:08.959021 | orchestrator | 2025-05-06 00:47:08 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:47:08.960086 | orchestrator | 2025-05-06 00:47:08 | INFO  | Task 04010858-e87d-4e68-ac1d-758953ca8ac4 is in state STARTED 2025-05-06 00:47:08.960696 | orchestrator | 2025-05-06 00:47:08 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:11.995461 | orchestrator | 2025-05-06 00:47:11 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:11.995716 | orchestrator | 2025-05-06 00:47:11 | INFO  | Task 
d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:11.996396 | orchestrator | 2025-05-06 00:47:11 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:47:11.996969 | orchestrator | 2025-05-06 00:47:11 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:47:11.997757 | orchestrator | 2025-05-06 00:47:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:47:11.999696 | orchestrator | 2025-05-06 00:47:11 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED 2025-05-06 00:47:15.037107 | orchestrator | 2025-05-06 00:47:11 | INFO  | Task 04010858-e87d-4e68-ac1d-758953ca8ac4 is in state SUCCESS 2025-05-06 00:47:15.037250 | orchestrator | 2025-05-06 00:47:11 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:15.037292 | orchestrator | 2025-05-06 00:47:15 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:15.038788 | orchestrator | 2025-05-06 00:47:15 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:15.039924 | orchestrator | 2025-05-06 00:47:15 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:47:15.043741 | orchestrator | 2025-05-06 00:47:15 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:47:15.043790 | orchestrator | 2025-05-06 00:47:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:47:15.045725 | orchestrator | 2025-05-06 00:47:15 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED 2025-05-06 00:47:18.081428 | orchestrator | 2025-05-06 00:47:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:18.081551 | orchestrator | 2025-05-06 00:47:18 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:18.081677 | orchestrator | 2025-05-06 00:47:18 | INFO  | Task 
d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:18.082772 | orchestrator | 2025-05-06 00:47:18 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:47:18.083529 | orchestrator | 2025-05-06 00:47:18 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:47:18.085733 | orchestrator | 2025-05-06 00:47:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:47:18.087839 | orchestrator | 2025-05-06 00:47:18 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED 2025-05-06 00:47:21.123381 | orchestrator | 2025-05-06 00:47:18 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:21.123496 | orchestrator | 2025-05-06 00:47:21 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:21.123817 | orchestrator | 2025-05-06 00:47:21 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:21.124490 | orchestrator | 2025-05-06 00:47:21 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:47:21.125350 | orchestrator | 2025-05-06 00:47:21 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:47:21.129438 | orchestrator | 2025-05-06 00:47:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:47:21.129903 | orchestrator | 2025-05-06 00:47:21 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED 2025-05-06 00:47:21.130140 | orchestrator | 2025-05-06 00:47:21 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:24.156216 | orchestrator | 2025-05-06 00:47:24 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:24.156435 | orchestrator | 2025-05-06 00:47:24 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:24.158290 | orchestrator | 2025-05-06 00:47:24 | INFO  | Task 
c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:47:24.158976 | orchestrator | 2025-05-06 00:47:24 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:47:24.161372 | orchestrator | 2025-05-06 00:47:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:47:24.162283 | orchestrator | 2025-05-06 00:47:24 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED 2025-05-06 00:47:24.162430 | orchestrator | 2025-05-06 00:47:24 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:27.195527 | orchestrator | 2025-05-06 00:47:27 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:27.196836 | orchestrator | 2025-05-06 00:47:27 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:27.199934 | orchestrator | 2025-05-06 00:47:27 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:47:27.200552 | orchestrator | 2025-05-06 00:47:27 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:47:27.201834 | orchestrator | 2025-05-06 00:47:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:47:27.203209 | orchestrator | 2025-05-06 00:47:27 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED 2025-05-06 00:47:30.246485 | orchestrator | 2025-05-06 00:47:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:30.246662 | orchestrator | 2025-05-06 00:47:30 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:30.247179 | orchestrator | 2025-05-06 00:47:30 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:30.248107 | orchestrator | 2025-05-06 00:47:30 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state STARTED 2025-05-06 00:47:30.249010 | orchestrator | 2025-05-06 00:47:30 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:47:30.250110 | orchestrator | 2025-05-06 00:47:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:47:30.252174 | orchestrator | 2025-05-06 00:47:30 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED 2025-05-06 00:47:33.288961 | orchestrator | 2025-05-06 00:47:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:47:33.289084 | orchestrator | 2025-05-06 00:47:33 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:47:33.289500 | orchestrator | 2025-05-06 00:47:33 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED 2025-05-06 00:47:33.291719 | orchestrator | 2025-05-06 00:47:33 | INFO  | Task c8f276d6-fc79-4d5a-b3ab-178042fbe823 is in state SUCCESS 2025-05-06 00:47:33.292804 | orchestrator | 2025-05-06 00:47:33.292837 | orchestrator | 2025-05-06 00:47:33.292854 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 00:47:33.292870 | orchestrator | 2025-05-06 00:47:33.292887 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 00:47:33.292940 | orchestrator | Tuesday 06 May 2025 00:46:59 +0000 (0:00:00.293) 0:00:00.293 *********** 2025-05-06 00:47:33.292954 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:47:33.292970 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:47:33.292984 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:47:33.292998 | orchestrator | 2025-05-06 00:47:33.293012 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 00:47:33.293026 | orchestrator | Tuesday 06 May 2025 00:46:59 +0000 (0:00:00.333) 0:00:00.627 *********** 2025-05-06 00:47:33.293040 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-06 00:47:33.293055 | orchestrator | ok: [testbed-node-1] => 
(item=enable_memcached_True) 2025-05-06 00:47:33.293068 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-06 00:47:33.293082 | orchestrator | 2025-05-06 00:47:33.293096 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-06 00:47:33.293110 | orchestrator | 2025-05-06 00:47:33.293123 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-06 00:47:33.293138 | orchestrator | Tuesday 06 May 2025 00:47:00 +0000 (0:00:00.418) 0:00:01.045 *********** 2025-05-06 00:47:33.293177 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:47:33.293193 | orchestrator | 2025-05-06 00:47:33.293208 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-06 00:47:33.293223 | orchestrator | Tuesday 06 May 2025 00:47:00 +0000 (0:00:00.549) 0:00:01.595 *********** 2025-05-06 00:47:33.293237 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-06 00:47:33.293252 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-06 00:47:33.293267 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-06 00:47:33.293282 | orchestrator | 2025-05-06 00:47:33.293296 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-06 00:47:33.293311 | orchestrator | Tuesday 06 May 2025 00:47:01 +0000 (0:00:00.822) 0:00:02.418 *********** 2025-05-06 00:47:33.293325 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-06 00:47:33.293340 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-06 00:47:33.293355 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-06 00:47:33.293369 | orchestrator | 2025-05-06 00:47:33.293384 | orchestrator | TASK [memcached : Check memcached container] 
*********************************** 2025-05-06 00:47:33.293399 | orchestrator | Tuesday 06 May 2025 00:47:03 +0000 (0:00:01.826) 0:00:04.244 *********** 2025-05-06 00:47:33.293414 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:47:33.293440 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:47:33.293455 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:47:33.293470 | orchestrator | 2025-05-06 00:47:33.293489 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-06 00:47:33.293504 | orchestrator | Tuesday 06 May 2025 00:47:05 +0000 (0:00:02.165) 0:00:06.409 *********** 2025-05-06 00:47:33.293519 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:47:33.293533 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:47:33.293548 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:47:33.293563 | orchestrator | 2025-05-06 00:47:33.293578 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:47:33.293592 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:47:33.293626 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:47:33.293642 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 00:47:33.293657 | orchestrator | 2025-05-06 00:47:33.293672 | orchestrator | 2025-05-06 00:47:33.293686 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 00:47:33.293701 | orchestrator | Tuesday 06 May 2025 00:47:09 +0000 (0:00:03.732) 0:00:10.141 *********** 2025-05-06 00:47:33.293715 | orchestrator | =============================================================================== 2025-05-06 00:47:33.293730 | orchestrator | memcached : Restart memcached container --------------------------------- 3.73s 
2025-05-06 00:47:33.293745 | orchestrator | memcached : Check memcached container ----------------------------------- 2.17s 2025-05-06 00:47:33.293759 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.83s 2025-05-06 00:47:33.293774 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.82s 2025-05-06 00:47:33.293788 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.55s 2025-05-06 00:47:33.293803 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-05-06 00:47:33.293817 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-05-06 00:47:33.293832 | orchestrator | 2025-05-06 00:47:33.293846 | orchestrator | 2025-05-06 00:47:33.293861 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 00:47:33.293884 | orchestrator | 2025-05-06 00:47:33.293898 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 00:47:33.293913 | orchestrator | Tuesday 06 May 2025 00:46:58 +0000 (0:00:00.374) 0:00:00.374 *********** 2025-05-06 00:47:33.293927 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:47:33.293942 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:47:33.293957 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:47:33.293972 | orchestrator | 2025-05-06 00:47:33.293987 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 00:47:33.294012 | orchestrator | Tuesday 06 May 2025 00:46:59 +0000 (0:00:00.529) 0:00:00.904 *********** 2025-05-06 00:47:33.294085 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-06 00:47:33.294100 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-06 00:47:33.294114 | orchestrator | ok: [testbed-node-2] => 
(item=enable_redis_True) 2025-05-06 00:47:33.294129 | orchestrator | 2025-05-06 00:47:33.294143 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-06 00:47:33.294158 | orchestrator | 2025-05-06 00:47:33.294172 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-06 00:47:33.294186 | orchestrator | Tuesday 06 May 2025 00:46:59 +0000 (0:00:00.299) 0:00:01.204 *********** 2025-05-06 00:47:33.294201 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:47:33.294215 | orchestrator | 2025-05-06 00:47:33.294230 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-06 00:47:33.294244 | orchestrator | Tuesday 06 May 2025 00:47:00 +0000 (0:00:00.561) 0:00:01.765 *********** 2025-05-06 00:47:33.294261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294426 | orchestrator | 2025-05-06 00:47:33.294441 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-06 00:47:33.294455 | orchestrator | Tuesday 06 May 2025 00:47:01 +0000 (0:00:01.395) 0:00:03.160 *********** 2025-05-06 00:47:33.294469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294584 | orchestrator | 2025-05-06 00:47:33.294599 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-06 00:47:33.294642 | orchestrator | Tuesday 06 May 2025 00:47:04 +0000 (0:00:02.787) 0:00:05.948 *********** 2025-05-06 00:47:33.294658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294761 | orchestrator | 2025-05-06 00:47:33.294775 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-06 00:47:33.294789 | orchestrator | Tuesday 06 May 2025 00:47:07 +0000 (0:00:03.220) 0:00:09.168 *********** 2025-05-06 00:47:33.294803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-06 00:47:33.294973 | orchestrator | 2025-05-06 00:47:33.294990 
| orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-06 00:47:33.295004 | orchestrator | Tuesday 06 May 2025 00:47:10 +0000 (0:00:03.214) 0:00:12.395 ***********
2025-05-06 00:47:33.295018 | orchestrator |
2025-05-06 00:47:33.295032 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-06 00:47:33.295046 | orchestrator | Tuesday 06 May 2025 00:47:10 +0000 (0:00:00.186) 0:00:12.582 ***********
2025-05-06 00:47:33.295060 | orchestrator |
2025-05-06 00:47:33.295074 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-06 00:47:33.295088 | orchestrator | Tuesday 06 May 2025 00:47:10 +0000 (0:00:00.083) 0:00:12.665 ***********
2025-05-06 00:47:33.295102 | orchestrator |
2025-05-06 00:47:33.295115 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-05-06 00:47:33.295129 | orchestrator | Tuesday 06 May 2025 00:47:11 +0000 (0:00:00.293) 0:00:12.959 ***********
2025-05-06 00:47:33.295142 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:47:33.295156 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:47:33.295170 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:47:33.295190 | orchestrator |
2025-05-06 00:47:33.295204 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-06 00:47:33.295217 | orchestrator | Tuesday 06 May 2025 00:47:20 +0000 (0:00:08.989) 0:00:21.949 ***********
2025-05-06 00:47:33.295231 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:47:33.295245 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:47:33.295258 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:47:33.295272 | orchestrator |
2025-05-06 00:47:33.295286 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:47:33.295299 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:47:33.295314 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:47:33.295335 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:47:33.295349 | orchestrator |
2025-05-06 00:47:33.295363 | orchestrator |
2025-05-06 00:47:33.295377 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 00:47:33.295390 | orchestrator | Tuesday 06 May 2025 00:47:30 +0000 (0:00:10.246) 0:00:32.195 ***********
2025-05-06 00:47:33.295404 | orchestrator | ===============================================================================
2025-05-06 00:47:33.295418 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.25s
2025-05-06 00:47:33.295431 | orchestrator | redis : Restart redis container ----------------------------------------- 8.99s
2025-05-06 00:47:33.295445 | orchestrator | redis : Check redis containers ------------------------------------------ 3.23s
2025-05-06 00:47:33.295459 | orchestrator | redis : Copying over redis config files --------------------------------- 3.22s
2025-05-06 00:47:33.295472 | orchestrator | redis : Copying over default config.json files -------------------------- 2.79s
2025-05-06 00:47:33.295486 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.40s
2025-05-06 00:47:33.295500 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.56s
2025-05-06 00:47:33.295514 | orchestrator | redis : include_tasks --------------------------------------------------- 0.56s
2025-05-06 00:47:33.295528 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s
2025-05-06 00:47:33.295542 | orchestrator | Group hosts based on enabled
services ----------------------------------- 0.30s
2025-05-06 00:47:33.295556 | orchestrator | 2025-05-06 00:47:33 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:47:33.295579 | orchestrator | 2025-05-06 00:47:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:47:33.296383 | orchestrator | 2025-05-06 00:47:33 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED
2025-05-06 00:47:36.333180 | orchestrator | 2025-05-06 00:47:33 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:47:36.333325 | orchestrator | 2025-05-06 00:47:36 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:47:36.333512 | orchestrator | 2025-05-06 00:47:36 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state STARTED
2025-05-06 00:47:36.334265 | orchestrator | 2025-05-06 00:47:36 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:47:36.337277 | orchestrator | 2025-05-06 00:47:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:47:36.338209 | orchestrator | 2025-05-06 00:47:36 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED
2025-05-06 00:47:39.383293 | orchestrator | 2025-05-06 00:47:36 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:48:12.937899 | orchestrator | 2025-05-06 00:48:09 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:48:12.938096 | orchestrator | 2025-05-06 00:48:12 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:48:12.941258 | orchestrator |
2025-05-06 00:48:12.941316 | orchestrator |
2025-05-06 00:48:12.941334 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 00:48:12.941349 | orchestrator |
2025-05-06 00:48:12.941364 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 00:48:12.941379 | orchestrator | Tuesday 06 May 2025 00:47:00 +0000 (0:00:00.520) 0:00:00.520 ***********
2025-05-06 00:48:12.941394 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:48:12.941410 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:48:12.941425 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:48:12.941440 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:48:12.941454 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:48:12.941469 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:48:12.941483 | orchestrator |
2025-05-06 00:48:12.941498 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 00:48:12.941513 | orchestrator | Tuesday 06 May 2025 00:47:01 +0000 (0:00:00.651) 0:00:01.172 ***********
2025-05-06 00:48:12.941528 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-06 00:48:12.941543 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-06 00:48:12.941593 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-06 00:48:12.941607 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-06 00:48:12.941621 |
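The wait loop in the log above repeatedly checks each task until it leaves the STARTED state, pausing one second between rounds. A minimal sketch of that pattern (`get_state` is a hypothetical stand-in for the real OSISM task API, not its actual client):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll task states until none is STARTED/PENDING, like the log's wait loop."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        # sorted() snapshots the set, so discarding while iterating is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("STARTED", "PENDING"):
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)

# Example with a fake state source that finishes after three polls:
calls = {"t1": 0}
def fake_state(task_id):
    calls[task_id] += 1
    return "STARTED" if calls[task_id] < 3 else "SUCCESS"

wait_for_tasks(["t1"], fake_state, interval=0.01)
```

The real deployment drives several tasks concurrently (five UUIDs in this log); polling them in one loop keeps the output interleaved per round, which is exactly the shape seen above.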
orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-06 00:48:12.941644 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-06 00:48:12.941659 | orchestrator |
2025-05-06 00:48:12.941673 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-05-06 00:48:12.941687 | orchestrator |
2025-05-06 00:48:12.941700 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-05-06 00:48:12.941714 | orchestrator | Tuesday 06 May 2025 00:47:02 +0000 (0:00:01.234) 0:00:02.407 ***********
2025-05-06 00:48:12.941728 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:48:12.941744 | orchestrator |
2025-05-06 00:48:12.941758 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-06 00:48:12.941772 | orchestrator | Tuesday 06 May 2025 00:47:03 +0000 (0:00:01.175) 0:00:03.582 ***********
2025-05-06 00:48:12.941785 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-06 00:48:12.941799 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-06 00:48:12.941813 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-06 00:48:12.941827 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-06 00:48:12.941841 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-06 00:48:12.941862 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-06 00:48:12.941876 | orchestrator |
2025-05-06 00:48:12.941890 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-06 00:48:12.941948 | orchestrator | Tuesday 06 May 2025 00:47:04 +0000 (0:00:01.265) 0:00:04.848 ***********
2025-05-06 00:48:12.941970 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-06 00:48:12.942015 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-06 00:48:12.942146 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-06 00:48:12.942161 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-06 00:48:12.942175 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-06 00:48:12.942189 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-06 00:48:12.942202 | orchestrator |
2025-05-06 00:48:12.942216 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-06 00:48:12.942230 | orchestrator | Tuesday 06 May 2025 00:47:06 +0000 (0:00:02.448) 0:00:06.638 ***********
2025-05-06 00:48:12.942244 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-05-06 00:48:12.942258 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:48:12.942274 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-05-06 00:48:12.942288 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:48:12.942301 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-05-06 00:48:12.942315 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:48:12.942329 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-05-06 00:48:12.942343 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:48:12.942358 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-05-06 00:48:12.942371 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:48:12.942385 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-05-06 00:48:12.942399 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:48:12.942413 | orchestrator |
2025-05-06 00:48:12.942427 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-06
00:48:12.942441 | orchestrator | Tuesday 06 May 2025 00:47:09 +0000 (0:00:02.448) 0:00:09.087 *********** 2025-05-06 00:48:12.942455 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:48:12.942469 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:48:12.942483 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:48:12.942497 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:48:12.942511 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:48:12.942525 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:48:12.942539 | orchestrator | 2025-05-06 00:48:12.942659 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-06 00:48:12.942684 | orchestrator | Tuesday 06 May 2025 00:47:10 +0000 (0:00:00.881) 0:00:09.968 *********** 2025-05-06 00:48:12.942722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942798 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.942969 | orchestrator | 2025-05-06 00:48:12.942982 | orchestrator | TASK [openvswitch : Copying 
over config.json files for services] *************** 2025-05-06 00:48:12.942995 | orchestrator | Tuesday 06 May 2025 00:47:12 +0000 (0:00:02.216) 0:00:12.185 *********** 2025-05-06 00:48:12.943008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943053 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 
00:48:12.943108 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943176 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943195 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943227 | orchestrator | 2025-05-06 00:48:12.943240 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-06 00:48:12.943252 | orchestrator | Tuesday 06 May 2025 00:47:14 +0000 (0:00:02.501) 0:00:14.686 *********** 2025-05-06 00:48:12.943264 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:48:12.943277 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:48:12.943289 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:48:12.943301 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:48:12.943313 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:48:12.943325 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:48:12.943337 | orchestrator | 2025-05-06 
00:48:12.943350 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-06 00:48:12.943362 | orchestrator | Tuesday 06 May 2025 00:47:16 +0000 (0:00:02.160) 0:00:16.846 *********** 2025-05-06 00:48:12.943374 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:48:12.943387 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:48:12.943399 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:48:12.943411 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:48:12.943423 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:48:12.943435 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:48:12.943448 | orchestrator | 2025-05-06 00:48:12.943460 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-06 00:48:12.943472 | orchestrator | Tuesday 06 May 2025 00:47:19 +0000 (0:00:02.072) 0:00:18.919 *********** 2025-05-06 00:48:12.943485 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:48:12.943497 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:48:12.943509 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:48:12.943521 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:48:12.943534 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:48:12.943546 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:48:12.943578 | orchestrator | 2025-05-06 00:48:12.943591 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-06 00:48:12.943603 | orchestrator | Tuesday 06 May 2025 00:47:21 +0000 (0:00:02.169) 0:00:21.088 *********** 2025-05-06 00:48:12.943616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943676 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943689 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943780 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 00:48:12.943793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-06 
2025-05-06 00:48:12.943806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-06 00:48:12.943818 | orchestrator |
2025-05-06 00:48:12.943830 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-06 00:48:12.943843 | orchestrator | Tuesday 06 May 2025 00:47:24 +0000 (0:00:03.217) 0:00:24.306 ***********
2025-05-06 00:48:12.943855 | orchestrator |
2025-05-06 00:48:12.943868 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-06 00:48:12.943880 | orchestrator | Tuesday 06 May 2025 00:47:24 +0000 (0:00:00.110) 0:00:24.417 ***********
2025-05-06 00:48:12.943892 | orchestrator |
2025-05-06 00:48:12.943904 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-06 00:48:12.943917 | orchestrator | Tuesday 06 May 2025 00:47:24 +0000 (0:00:00.307) 0:00:24.724 ***********
2025-05-06 00:48:12.943929 | orchestrator |
2025-05-06 00:48:12.943941 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-06 00:48:12.943959 | orchestrator | Tuesday 06 May 2025 00:47:24 +0000 (0:00:00.097) 0:00:24.822 ***********
2025-05-06 00:48:12.943971 | orchestrator |
2025-05-06 00:48:12.943988 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-06 00:48:12.944001 | orchestrator | Tuesday 06 May 2025 00:47:25 +0000 (0:00:00.204) 0:00:25.026 ***********
2025-05-06 00:48:12.944013 | orchestrator |
2025-05-06 00:48:12.944040 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-05-06 00:48:12.944053 | orchestrator | Tuesday 06 May 2025 00:47:25 +0000 (0:00:00.088) 0:00:25.115 ***********
2025-05-06 00:48:12.944065 | orchestrator |
2025-05-06 00:48:12.944077 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-05-06 00:48:12.944090 | orchestrator | Tuesday 06 May 2025 00:47:25 +0000 (0:00:00.182) 0:00:25.298 ***********
2025-05-06 00:48:12.944102 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:48:12.944114 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:48:12.944126 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:48:12.944138 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:48:12.944150 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:48:12.944180 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:48:12.944193 | orchestrator |
2025-05-06 00:48:12.944206 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-05-06 00:48:12.944218 | orchestrator | Tuesday 06 May 2025 00:47:36 +0000 (0:00:10.749) 0:00:36.047 ***********
2025-05-06 00:48:12.944236 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:48:12.944249 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:48:12.944261 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:48:12.944273 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:48:12.944286 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:48:12.944298 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:48:12.944311 | orchestrator |
2025-05-06 00:48:12.944323 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-06 00:48:12.944335 | orchestrator | Tuesday 06 May 2025 00:47:38 +0000 (0:00:02.180) 0:00:38.228 ***********
2025-05-06 00:48:12.944347 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:48:12.944367 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:48:12.944380 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:48:12.944393 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:48:12.944405 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:48:12.944417 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:48:12.944430 | orchestrator |
2025-05-06 00:48:12.944442 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-05-06 00:48:12.944454 | orchestrator | Tuesday 06 May 2025 00:47:47 +0000 (0:00:09.582) 0:00:47.810 ***********
2025-05-06 00:48:12.944467 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-05-06 00:48:12.944479 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-05-06 00:48:12.944492 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-05-06 00:48:12.944504 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-05-06 00:48:12.944516 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-05-06 00:48:12.944529 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-05-06 00:48:12.944541 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-05-06 00:48:12.944607 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-05-06 00:48:12.944621 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-05-06 00:48:12.944640 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-05-06 00:48:12.944653 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-05-06 00:48:12.944665 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-05-06 00:48:12.944677 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-06 00:48:12.944694 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-06 00:48:12.944707 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-06 00:48:12.944719 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-06 00:48:12.944729 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-06 00:48:12.944739 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-05-06 00:48:12.944749 | orchestrator |
2025-05-06 00:48:12.944759 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-05-06 00:48:12.944769 | orchestrator | Tuesday 06 May 2025 00:47:55 +0000 (0:00:07.762) 0:00:55.573 ***********
2025-05-06 00:48:12.944779 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-05-06 00:48:12.944789 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:48:12.944799 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-05-06 00:48:12.944809 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:48:12.944819 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-05-06 00:48:12.944829 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:48:12.944839 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-05-06 00:48:12.944849 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-05-06 00:48:12.944859 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-05-06 00:48:12.944869 | orchestrator |
2025-05-06 00:48:12.944879 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-05-06 00:48:12.944889 | orchestrator | Tuesday 06 May 2025 00:47:57 +0000 (0:00:02.294) 0:00:57.868 ***********
2025-05-06 00:48:12.944899 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-05-06 00:48:12.944909 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:48:12.944919 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-05-06 00:48:12.944929 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:48:12.944939 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-05-06 00:48:12.944949 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:48:12.944960 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-05-06 00:48:12.944975 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-05-06 00:48:12.945055 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-05-06 00:48:12.945068 | orchestrator |
2025-05-06 00:48:12.945079 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-06 00:48:12.945089 | orchestrator | Tuesday 06 May 2025 00:48:01 +0000 (0:00:03.584) 0:01:01.452 ***********
2025-05-06 00:48:12.945099 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:48:12.945109 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:48:12.945119 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:48:12.945129 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:48:12.945139 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:48:12.945148 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:48:12.945158 | orchestrator |
2025-05-06 00:48:12.945168 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:48:12.945184 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:48:12.945196 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:48:12.945206 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-06 00:48:12.945216 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-06 00:48:12.945226 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-06 00:48:12.945240 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-06 00:48:12.945250 | orchestrator |
2025-05-06 00:48:12.945260 | orchestrator |
2025-05-06 00:48:12.945270 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 00:48:12.945280 | orchestrator | Tuesday 06 May 2025 00:48:09 +0000 (0:00:07.973) 0:01:09.426 ***********
2025-05-06 00:48:12.945290 | orchestrator | ===============================================================================
2025-05-06 00:48:12.945300 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.56s
2025-05-06 00:48:12.945310 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.75s
2025-05-06 00:48:12.945320 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.76s
2025-05-06 00:48:12.945330 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.58s
2025-05-06 00:48:12.945340 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.22s
2025-05-06 00:48:12.945350 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.50s
2025-05-06 00:48:12.945360 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.45s
2025-05-06 00:48:12.945370 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.29s
2025-05-06 00:48:12.945380 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.22s
2025-05-06 00:48:12.945390 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.18s
2025-05-06 00:48:12.945403 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.17s
2025-05-06 00:48:12.945413 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.16s
2025-05-06 00:48:12.945423 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.07s
2025-05-06 00:48:12.945433 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.79s
2025-05-06 00:48:12.945443 | orchestrator | module-load : Load modules ---------------------------------------------- 1.27s
2025-05-06 00:48:12.945453 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.23s
2025-05-06 00:48:12.945463 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.18s
2025-05-06 00:48:12.945473 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.99s
2025-05-06 00:48:12.945483 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.88s
2025-05-06 00:48:12.945493 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s
2025-05-06 00:48:12.945503 | orchestrator | 2025-05-06 00:48:12 | INFO  | Task d0ba1670-cd4d-43ab-a27e-3f63c75bedda is in state SUCCESS
2025-05-06 00:48:12.945513 | orchestrator | 2025-05-06 00:48:12 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:48:12.945523 | orchestrator | 2025-05-06 00:48:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:48:12.945539 | orchestrator | 2025-05-06 00:48:12 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED
2025-05-06 00:48:12.945889 | orchestrator | 2025-05-06 00:48:12 | INFO  | Task 33238d44-2989-4fb5-9e22-918d6d67bde2 is in state STARTED
2025-05-06 00:48:16.002298 | orchestrator | 2025-05-06 00:48:12 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:48:16.002443 | orchestrator | 2025-05-06 00:48:15 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:48:19.026691 | orchestrator | 2025-05-06 00:48:15 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:48:19.026892 | orchestrator | 2025-05-06 00:48:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:48:19.026919 | orchestrator | 2025-05-06 00:48:15 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED
2025-05-06 00:48:19.026931 | orchestrator | 2025-05-06 00:48:15 | INFO  | Task 33238d44-2989-4fb5-9e22-918d6d67bde2 is in state STARTED
2025-05-06 00:48:19.026942 | orchestrator | 2025-05-06 00:48:15 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:48:19.026968 | orchestrator | 2025-05-06 00:48:19 | INFO
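The PLAY RECAP lines above carry per-host counters in a fixed `key=value` layout, which makes them easy to post-process when scanning job logs for failures. A minimal, illustrative Python sketch (not part of the job itself; the function name `parse_recap` is an assumption for this example):

```python
import re

# Matches one Ansible PLAY RECAP line, e.g.
# "testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0"
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+unreachable=(?P<unreachable>\d+)\s+"
    r"failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)\s+"
    r"rescued=(?P<rescued>\d+)\s+ignored=(?P<ignored>\d+)"
)

def parse_recap(line: str) -> dict:
    """Parse one PLAY RECAP line into the host name plus integer counters."""
    m = RECAP_RE.search(line)
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    d = m.groupdict()
    return {"host": d.pop("host"), **{k: int(v) for k, v in d.items()}}

stats = parse_recap(
    "testbed-node-0 : ok=17  changed=13  unreachable=0 "
    "failed=0 skipped=3  rescued=0 ignored=0"
)
assert stats["host"] == "testbed-node-0" and stats["failed"] == 0
```

A scan like this is how one would confirm quickly that every node in the recap reports `failed=0` and `unreachable=0`.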
| Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state STARTED
2025-05-06 00:49:26.097795 | orchestrator | 2025-05-06 00:49:26 | INFO  | Task 33238d44-2989-4fb5-9e22-918d6d67bde2 is in state STARTED
2025-05-06 00:49:29.137600 | orchestrator | 2025-05-06 00:49:26 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:49:29.137744 | orchestrator | 2025-05-06 00:49:29 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:49:29.141715 | orchestrator | 2025-05-06 00:49:29 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:49:29.142222 | orchestrator | 2025-05-06 00:49:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:49:29.143007 | orchestrator | 2025-05-06 00:49:29 | INFO  | Task 5412bda7-5348-4a21-8c6e-6bac35bcf28e is in state SUCCESS
2025-05-06 00:49:29.144923 | orchestrator |
2025-05-06 00:49:29.144971 | orchestrator |
2025-05-06 00:49:29.144986 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-06 00:49:29.145000 | orchestrator |
2025-05-06 00:49:29.145014 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-06 00:49:29.145029 | orchestrator | Tuesday 06 May 2025 00:47:13 +0000 (0:00:00.103) 0:00:00.103 ***********
2025-05-06 00:49:29.145043 | orchestrator | ok: [localhost] => {
2025-05-06 00:49:29.145059 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-05-06 00:49:29.145073 | orchestrator | }
2025-05-06 00:49:29.145087 | orchestrator |
2025-05-06 00:49:29.145101 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-06 00:49:29.145115 | orchestrator | Tuesday 06 May 2025 00:47:13 +0000 (0:00:00.035) 0:00:00.139 ***********
2025-05-06 00:49:29.145129 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-06 00:49:29.145144 | orchestrator | ...ignoring
2025-05-06 00:49:29.145158 | orchestrator |
2025-05-06 00:49:29.145172 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-06 00:49:29.145185 | orchestrator | Tuesday 06 May 2025 00:47:16 +0000 (0:00:02.887) 0:00:03.027 ***********
2025-05-06 00:49:29.145199 | orchestrator | skipping: [localhost]
2025-05-06 00:49:29.145213 | orchestrator |
2025-05-06 00:49:29.145226 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-06 00:49:29.145240 | orchestrator | Tuesday 06 May 2025 00:47:16 +0000 (0:00:00.042) 0:00:03.069 ***********
2025-05-06 00:49:29.145254 | orchestrator | ok: [localhost]
2025-05-06 00:49:29.145267 | orchestrator |
2025-05-06 00:49:29.145281 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 00:49:29.145295 | orchestrator |
2025-05-06 00:49:29.145308 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 00:49:29.145322 | orchestrator | Tuesday 06 May 2025 00:47:16 +0000 (0:00:00.125) 0:00:03.194 ***********
2025-05-06 00:49:29.145337 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:49:29.145350 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:49:29.145364 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:49:29.145378 | orchestrator |
2025-05-06 00:49:29.145392 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 00:49:29.145405 | orchestrator | Tuesday 06 May 2025 00:47:16 +0000 (0:00:00.315) 0:00:03.510 ***********
2025-05-06 00:49:29.145507 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-06 00:49:29.145526 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-06 00:49:29.145543 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-06 00:49:29.145558 | orchestrator |
2025-05-06 00:49:29.145574 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-06 00:49:29.145590 | orchestrator |
2025-05-06 00:49:29.145605 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-06 00:49:29.145621 | orchestrator | Tuesday 06 May 2025 00:47:17 +0000 (0:00:00.499) 0:00:04.009 ***********
2025-05-06 00:49:29.145637 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:49:29.145653 | orchestrator |
2025-05-06 00:49:29.145669 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-06 00:49:29.145684 | orchestrator | Tuesday 06 May 2025 00:47:18 +0000 (0:00:00.821) 0:00:04.830 ***********
2025-05-06 00:49:29.145699 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:49:29.145715 | orchestrator |
2025-05-06 00:49:29.145731 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-06 00:49:29.145747 | orchestrator | Tuesday 06 May 2025 00:47:19 +0000 (0:00:01.139) 0:00:05.970 ***********
2025-05-06 00:49:29.145762 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:49:29.145779 | orchestrator |
2025-05-06 00:49:29.145795 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-06 00:49:29.145820 | orchestrator | Tuesday 06 May 2025 00:47:19 +0000 (0:00:00.601) 0:00:06.571 ***********
2025-05-06 00:49:29.145836 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:49:29.145850 | orchestrator |
2025-05-06 00:49:29.145864 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-06 00:49:29.145877 | orchestrator | Tuesday 06 May 2025 00:47:21 +0000 (0:00:01.415) 0:00:07.986 ***********
2025-05-06 00:49:29.145891 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:49:29.145905 | orchestrator |
2025-05-06 00:49:29.145918 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-06 00:49:29.145932 | orchestrator | Tuesday 06 May 2025 00:47:22 +0000 (0:00:00.893) 0:00:08.880 ***********
2025-05-06 00:49:29.145945 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:49:29.145959 | orchestrator |
2025-05-06 00:49:29.145973 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-06 00:49:29.145987 | orchestrator | Tuesday 06 May 2025 00:47:22 +0000 (0:00:00.423) 0:00:09.303 ***********
2025-05-06 00:49:29.146000 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:49:29.146104 | orchestrator |
2025-05-06 00:49:29.146124 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-06 00:49:29.146138 | orchestrator | Tuesday 06 May 2025 00:47:23 +0000 (0:00:00.933) 0:00:10.237 ***********
2025-05-06 00:49:29.146152 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:49:29.146166 | orchestrator |
2025-05-06 00:49:29.146180 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-06 00:49:29.146194 | orchestrator | Tuesday 06 May 2025 00:47:24 +0000 (0:00:00.785) 0:00:11.022 ***********
2025-05-06 00:49:29.146208 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:49:29.146221 | orchestrator |
2025-05-06 00:49:29.146235 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-05-06 00:49:29.146249 | orchestrator | Tuesday 06 May 2025 00:47:24 +0000 (0:00:00.472) 0:00:11.494 ***********
2025-05-06 00:49:29.146263 | orchestrator |
skipping: [testbed-node-0]
2025-05-06 00:49:29.146277 | orchestrator |
2025-05-06 00:49:29.146300 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-05-06 00:49:29.146315 | orchestrator | Tuesday 06 May 2025 00:47:25 +0000 (0:00:00.311) 0:00:11.806 ***********
2025-05-06 00:49:29.146355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-06 00:49:29.146384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-06 00:49:29.146399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-06 00:49:29.146445 | orchestrator |
2025-05-06 00:49:29.146460 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-05-06 00:49:29.146474 | orchestrator | Tuesday 06 May 2025 00:47:26 +0000 (0:00:01.216) 0:00:13.022 ***********
2025-05-06 00:49:29.146499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-06 00:49:29.146522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-06 00:49:29.146537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-06 00:49:29.146552 | orchestrator |
2025-05-06 00:49:29.146566 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-05-06 00:49:29.146580 | orchestrator | Tuesday 06 May 2025 00:47:28 +0000 (0:00:01.722) 0:00:14.744 ***********
2025-05-06 00:49:29.146594 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-06 00:49:29.146608 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-06 00:49:29.146622 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-06 00:49:29.146636 |
orchestrator |
2025-05-06 00:49:29.146650 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-05-06 00:49:29.146664 | orchestrator | Tuesday 06 May 2025 00:47:29 +0000 (0:00:01.702) 0:00:16.447 ***********
2025-05-06 00:49:29.146678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-06 00:49:29.146707 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-06 00:49:29.146721 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-06 00:49:29.146734 | orchestrator |
2025-05-06 00:49:29.146748 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-05-06 00:49:29.146769 | orchestrator | Tuesday 06 May 2025 00:47:31 +0000 (0:00:01.948) 0:00:18.395 ***********
2025-05-06 00:49:29.146783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-06 00:49:29.146796 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-06 00:49:29.146810 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-06 00:49:29.146824 | orchestrator |
2025-05-06 00:49:29.146844 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-05-06 00:49:29.146859 | orchestrator | Tuesday 06 May 2025 00:47:33 +0000 (0:00:01.753) 0:00:20.148 ***********
2025-05-06 00:49:29.146873 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-06 00:49:29.146886 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-06 00:49:29.146900 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-06 00:49:29.146914 | orchestrator |
2025-05-06 00:49:29.146928 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-05-06 00:49:29.146942 | orchestrator | Tuesday 06 May 2025 00:47:35 +0000 (0:00:01.874) 0:00:22.023 ***********
2025-05-06 00:49:29.146956 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-06 00:49:29.146969 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-06 00:49:29.146983 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-06 00:49:29.146997 | orchestrator |
2025-05-06 00:49:29.147010 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-05-06 00:49:29.147029 | orchestrator | Tuesday 06 May 2025 00:47:37 +0000 (0:00:02.288) 0:00:24.015 ***********
2025-05-06 00:49:29.147044 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-06 00:49:29.147058 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-06 00:49:29.147075 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-06 00:49:29.147089 | orchestrator |
2025-05-06 00:49:29.147103 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-06 00:49:29.147181 | orchestrator | Tuesday 06 May 2025 00:47:39 +0000 (0:00:02.288) 0:00:26.304 ***********
2025-05-06 00:49:29.147197 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:49:29.147211 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:49:29.147225 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:49:29.147239 | orchestrator |
2025-05-06 00:49:29.147253 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-05-06 00:49:29.147267 | orchestrator | Tuesday 06 May 2025 00:47:40 +0000 (0:00:01.026) 0:00:27.330 ***********
2025-05-06 00:49:29.147283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-06 00:49:29.147306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-06 00:49:29.147331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-06 00:49:29.147347 | orchestrator |
2025-05-06 00:49:29.147361 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-05-06 00:49:29.147375 | orchestrator | Tuesday 06 May 2025 00:47:42 +0000 (0:00:01.442) 0:00:28.773 ***********
2025-05-06 00:49:29.147389 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:49:29.147403 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:49:29.147435 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:49:29.147450 | orchestrator |
2025-05-06 00:49:29.147464 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-05-06 00:49:29.147478 | orchestrator | Tuesday 06 May 2025 00:47:42 +0000 (0:00:00.876) 0:00:29.650 ***********
2025-05-06 00:49:29.147492 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:49:29.147506 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:49:29.147520 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:49:29.147533 | orchestrator |
2025-05-06 00:49:29.147547 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-05-06 00:49:29.147561 | orchestrator | Tuesday 06 May 2025 00:47:48 +0000 (0:00:05.601) 0:00:35.251 ***********
2025-05-06 00:49:29.147575 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:49:29.147589 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:49:29.147602 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:49:29.147616 | orchestrator |
2025-05-06 00:49:29.147630 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-06 00:49:29.147644 | orchestrator |
2025-05-06 00:49:29.147658 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-06 00:49:29.147671 | orchestrator | Tuesday 06 May 2025 00:47:48 +0000 (0:00:00.394) 0:00:35.646 ***********
2025-05-06 00:49:29.147685 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:49:29.147706 | orchestrator |
2025-05-06 00:49:29.147720 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-06 00:49:29.147734 | orchestrator | Tuesday 06 May 2025 00:47:49 +0000 (0:00:00.767) 0:00:36.414 ***********
2025-05-06 00:49:29.147748 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:49:29.147761 | orchestrator |
2025-05-06 00:49:29.147775 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-06 00:49:29.147789 | orchestrator | Tuesday 06 May 2025 00:47:49 +0000 (0:00:00.226) 0:00:36.641 ***********
2025-05-06 00:49:29.147802 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:49:29.147816 | orchestrator |
2025-05-06 00:49:29.147830 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-06 00:49:29.147843 | orchestrator | Tuesday 06 May 2025 00:47:51 +0000 (0:00:01.732) 0:00:38.374 ***********
2025-05-06 00:49:29.147857 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:49:29.147871 | orchestrator |
2025-05-06 00:49:29.147885 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-06 00:49:29.147898 | orchestrator |
2025-05-06 00:49:29.147912 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-06 00:49:29.147926 | orchestrator | Tuesday 06 May 2025 00:48:46 +0000 (0:00:55.245) 0:01:33.620 ***********
2025-05-06 00:49:29.147940 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:49:29.147953 | orchestrator |
2025-05-06 00:49:29.147967 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-06 00:49:29.147981 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.570) 0:01:34.190 ***********
2025-05-06 00:49:29.147995 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:49:29.148008 | orchestrator |
2025-05-06 00:49:29.148022 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-06 00:49:29.148036 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.207) 0:01:34.398 ***********
2025-05-06 00:49:29.148050 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:49:29.148063 | orchestrator |
2025-05-06 00:49:29.148077 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-06 00:49:29.148091 | orchestrator | Tuesday 06 May 2025 00:48:55 +0000 (0:00:07.476) 0:01:41.875 ***********
2025-05-06 00:49:29.148104 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:49:29.148124 | orchestrator |
2025-05-06 00:49:29.148138 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-06 00:49:29.148152 | orchestrator |
2025-05-06 00:49:29.148166 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-06 00:49:29.148180 | orchestrator | Tuesday 06 May 2025 00:49:05 +0000 (0:00:10.122) 0:01:51.998 ***********
2025-05-06 00:49:29.148194 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:49:29.148208 | orchestrator |
2025-05-06 00:49:29.148226 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-06 00:49:29.148240 | orchestrator | Tuesday 06 May 2025 00:49:05 +0000 (0:00:00.599) 0:01:52.598 ***********
2025-05-06 00:49:29.148254 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:49:29.148268 | orchestrator |
2025-05-06 00:49:29.148282 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-06 00:49:29.148303 | orchestrator | Tuesday 06 May 2025 00:49:06 +0000 (0:00:00.327) 0:01:52.925 ***********
2025-05-06 00:49:29.148317 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:49:29.148331 | orchestrator |
2025-05-06 00:49:29.148345 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-06 00:49:29.148359 | orchestrator | Tuesday 06 May 2025 00:49:08 +0000 (0:00:02.348) 0:01:55.274 ***********
2025-05-06 00:49:29.148372 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:49:29.148386 | orchestrator |
2025-05-06 00:49:29.148400 | orchestrator | PLAY [Apply rabbitmq post-configuration]
***************************************
2025-05-06 00:49:29.148429 | orchestrator |
2025-05-06 00:49:29.148444 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-06 00:49:29.148457 | orchestrator | Tuesday 06 May 2025 00:49:22 +0000 (0:00:14.071) 0:02:09.345 ***********
2025-05-06 00:49:29.148478 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:49:29.148492 | orchestrator |
2025-05-06 00:49:29.148505 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-06 00:49:29.148519 | orchestrator | Tuesday 06 May 2025 00:49:23 +0000 (0:00:00.600) 0:02:09.946 ***********
2025-05-06 00:49:29.148533 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-06 00:49:29.148547 | orchestrator | enable_outward_rabbitmq_True
2025-05-06 00:49:29.148561 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-06 00:49:29.148575 | orchestrator | outward_rabbitmq_restart
2025-05-06 00:49:29.148589 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:49:29.148603 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:49:29.148617 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:49:29.148631 | orchestrator |
2025-05-06 00:49:29.148645 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-06 00:49:29.148658 | orchestrator | skipping: no hosts matched
2025-05-06 00:49:29.148672 | orchestrator |
2025-05-06 00:49:29.148686 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-06 00:49:29.148700 | orchestrator | skipping: no hosts matched
2025-05-06 00:49:29.148713 | orchestrator |
2025-05-06 00:49:29.148727 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-05-06 00:49:29.148741 | orchestrator | skipping: no hosts matched
2025-05-06 00:49:29.148754 | orchestrator |
2025-05-06 00:49:29.148768 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:49:29.148782 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-05-06 00:49:29.148796 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-06 00:49:29.148811 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-06 00:49:29.148825 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-06 00:49:29.148839 | orchestrator |
2025-05-06 00:49:29.148852 | orchestrator |
2025-05-06 00:49:29.148866 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 00:49:29.148880 | orchestrator | Tuesday 06 May 2025 00:49:26 +0000 (0:00:02.772) 0:02:12.718 ***********
2025-05-06 00:49:29.148894 | orchestrator | ===============================================================================
2025-05-06 00:49:29.148907 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.44s
2025-05-06 00:49:29.148921 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.56s
2025-05-06 00:49:29.148935 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.60s
2025-05-06 00:49:29.148949 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.89s
2025-05-06 00:49:29.148962 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.77s
2025-05-06 00:49:29.148976 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.29s
2025-05-06 00:49:29.148990 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.99s
2025-05-06 00:49:29.149003 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.95s
2025-05-06 00:49:29.149017 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.94s
2025-05-06 00:49:29.149031 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.87s
2025-05-06 00:49:29.149045 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.75s
2025-05-06 00:49:29.149059 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.72s
2025-05-06 00:49:29.149078 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.70s
2025-05-06 00:49:29.149097 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.44s
2025-05-06 00:49:29.149111 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.42s
2025-05-06 00:49:29.149125 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.22s
2025-05-06 00:49:29.149138 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.14s
2025-05-06 00:49:29.149152 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.03s
2025-05-06 00:49:29.149166 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.93s
2025-05-06 00:49:29.149180 | orchestrator | rabbitmq : Check if running RabbitMQ is at most one version behind ------ 0.89s
2025-05-06 00:49:29.149199 | orchestrator | 2025-05-06 00:49:29 | INFO  | Task 33238d44-2989-4fb5-9e22-918d6d67bde2 is in state STARTED
2025-05-06 00:49:32.191915 | orchestrator | 2025-05-06 00:49:29 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:49:32.192060 | orchestrator | 2025-05-06 00:49:32 | INFO  | Task
fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED [... repetitive polling output condensed: the four tasks fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d, 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7, 6bf1245d-e18f-4d09-b4c2-f5227351db01 and 33238d44-2989-4fb5-9e22-918d6d67bde2 are re-checked every ~3 seconds, with "Wait 1 second(s) until the next check" between rounds, and all four remain in state STARTED from 00:49:32 through 00:50:33 ...] 2025-05-06 00:50:36.222648 | orchestrator | 2025-05-06 00:50:36 | INFO  | Task 
fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:50:36.222756 | orchestrator | 2025-05-06 00:50:36 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:50:36.223971 | orchestrator | 2025-05-06 00:50:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:50:36.225057 | orchestrator | 2025-05-06 00:50:36 | INFO  | Task 33238d44-2989-4fb5-9e22-918d6d67bde2 is in state SUCCESS 2025-05-06 00:50:36.225891 | orchestrator | 2025-05-06 00:50:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:50:36.228139 | orchestrator | 2025-05-06 00:50:36.228327 | orchestrator | 2025-05-06 00:50:36.228386 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 00:50:36.228403 | orchestrator | 2025-05-06 00:50:36.228417 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 00:50:36.228431 | orchestrator | Tuesday 06 May 2025 00:48:13 +0000 (0:00:00.222) 0:00:00.222 *********** 2025-05-06 00:50:36.228446 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:50:36.228461 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:50:36.228475 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:50:36.228489 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:50:36.228503 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:50:36.228517 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:50:36.228531 | orchestrator | 2025-05-06 00:50:36.228545 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 00:50:36.228559 | orchestrator | Tuesday 06 May 2025 00:48:14 +0000 (0:00:00.807) 0:00:01.030 *********** 2025-05-06 00:50:36.228573 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-06 00:50:36.228587 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-06 00:50:36.228601 | orchestrator | ok: [testbed-node-2] 
=> (item=enable_ovn_True) 2025-05-06 00:50:36.228650 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-06 00:50:36.228667 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-06 00:50:36.228681 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-06 00:50:36.228695 | orchestrator | 2025-05-06 00:50:36.228709 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-06 00:50:36.228723 | orchestrator | 2025-05-06 00:50:36.228736 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-06 00:50:36.228750 | orchestrator | Tuesday 06 May 2025 00:48:15 +0000 (0:00:01.280) 0:00:02.310 *********** 2025-05-06 00:50:36.228765 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:50:36.228781 | orchestrator | 2025-05-06 00:50:36.228795 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-06 00:50:36.228808 | orchestrator | Tuesday 06 May 2025 00:48:17 +0000 (0:00:02.267) 0:00:04.578 *********** 2025-05-06 00:50:36.228823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.228841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.228855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.228887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.228902 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.228928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.228943 | orchestrator | 2025-05-06 00:50:36.228957 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-06 00:50:36.228972 | orchestrator | Tuesday 06 May 2025 00:48:18 +0000 (0:00:01.138) 0:00:05.716 *********** 2025-05-06 00:50:36.229001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229029 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229093 | orchestrator | 2025-05-06 00:50:36.229107 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-06 00:50:36.229120 | orchestrator | Tuesday 06 May 2025 00:48:20 +0000 (0:00:01.815) 0:00:07.532 *********** 2025-05-06 00:50:36.229134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229196 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229224 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229238 | orchestrator | 2025-05-06 00:50:36.229252 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-06 00:50:36.229266 | orchestrator | Tuesday 06 May 2025 00:48:21 +0000 (0:00:00.973) 0:00:08.505 *********** 2025-05-06 00:50:36.229320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229364 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229404 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229420 | orchestrator | 2025-05-06 00:50:36.229434 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-06 00:50:36.229448 | orchestrator | Tuesday 06 May 2025 00:48:23 +0000 (0:00:01.571) 0:00:10.076 *********** 2025-05-06 00:50:36.229461 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229540 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.229554 | orchestrator | 2025-05-06 00:50:36.229568 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-06 00:50:36.229595 | orchestrator | Tuesday 06 May 2025 00:48:24 +0000 (0:00:01.178) 0:00:11.254 *********** 2025-05-06 00:50:36.229620 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:50:36.229648 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:50:36.229673 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:50:36.229701 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:50:36.229727 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:50:36.229753 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:50:36.229781 | orchestrator | 2025-05-06 00:50:36.229809 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-06 00:50:36.229837 | orchestrator | Tuesday 06 May 2025 00:48:27 +0000 (0:00:02.732) 0:00:13.987 *********** 2025-05-06 00:50:36.229865 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-06 00:50:36.229881 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-06 00:50:36.229895 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-06 00:50:36.229916 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-06 00:50:36.229937 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-06 00:50:36.229957 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-06 00:50:36.229982 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-06 00:50:36.230007 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-06 00:50:36.230080 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-06 00:50:36.230102 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-06 00:50:36.230116 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-06 00:50:36.230139 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-06 00:50:36.230153 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-06 00:50:36.230169 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-06 00:50:36.230184 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-06 00:50:36.230198 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-06 00:50:36.230212 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-06 00:50:36.230226 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-06 00:50:36.230240 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-06 00:50:36.230254 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-06 00:50:36.230268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-06 00:50:36.230282 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-06 00:50:36.230317 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-06 00:50:36.230331 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-06 00:50:36.230354 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-06 00:50:36.230379 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-06 00:50:36.230403 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-06 00:50:36.230423 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-06 00:50:36.230437 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-06 00:50:36.230451 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-06 00:50:36.230465 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-06 00:50:36.230480 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-06 00:50:36.230494 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-06 00:50:36.230507 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-06 00:50:36.230521 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-06 00:50:36.230535 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-06 00:50:36.230549 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-06 00:50:36.230563 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-06 00:50:36.230577 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-06 00:50:36.230591 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-06 00:50:36.230619 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-06 00:50:36.230640 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-06 00:50:36.230654 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-06 00:50:36.230668 | orchestrator | ok: [testbed-node-2] => 
(item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-06 00:50:36.230682 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-06 00:50:36.230696 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-06 00:50:36.230710 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-06 00:50:36.230724 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-06 00:50:36.230738 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-06 00:50:36.230752 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-06 00:50:36.230766 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-06 00:50:36.230780 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-06 00:50:36.230793 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-06 00:50:36.230807 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-06 00:50:36.230821 | orchestrator | 2025-05-06 00:50:36.230835 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-06 00:50:36.230849 | orchestrator | Tuesday 06 May 2025 
00:48:46 +0000 (0:00:19.833) 0:00:33.820 *********** 2025-05-06 00:50:36.230862 | orchestrator | 2025-05-06 00:50:36.230876 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-06 00:50:36.230890 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.059) 0:00:33.879 *********** 2025-05-06 00:50:36.230904 | orchestrator | 2025-05-06 00:50:36.230918 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-06 00:50:36.230931 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.284) 0:00:34.164 *********** 2025-05-06 00:50:36.230944 | orchestrator | 2025-05-06 00:50:36.230958 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-06 00:50:36.230971 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.057) 0:00:34.221 *********** 2025-05-06 00:50:36.230985 | orchestrator | 2025-05-06 00:50:36.230999 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-06 00:50:36.231012 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.053) 0:00:34.274 *********** 2025-05-06 00:50:36.231026 | orchestrator | 2025-05-06 00:50:36.231040 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-06 00:50:36.231053 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.055) 0:00:34.330 *********** 2025-05-06 00:50:36.231067 | orchestrator | 2025-05-06 00:50:36.231080 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-06 00:50:36.231094 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.297) 0:00:34.628 *********** 2025-05-06 00:50:36.231114 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:50:36.231128 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:50:36.231142 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:50:36.231156 | 
orchestrator | ok: [testbed-node-1] 2025-05-06 00:50:36.231170 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:50:36.231183 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:50:36.231197 | orchestrator | 2025-05-06 00:50:36.231211 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-06 00:50:36.231225 | orchestrator | Tuesday 06 May 2025 00:48:50 +0000 (0:00:02.504) 0:00:37.132 *********** 2025-05-06 00:50:36.231239 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:50:36.231252 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:50:36.231266 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:50:36.231280 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:50:36.231350 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:50:36.231365 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:50:36.231379 | orchestrator | 2025-05-06 00:50:36.231393 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-06 00:50:36.231407 | orchestrator | 2025-05-06 00:50:36.231420 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-06 00:50:36.231434 | orchestrator | Tuesday 06 May 2025 00:49:14 +0000 (0:00:23.781) 0:01:00.914 *********** 2025-05-06 00:50:36.231448 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:50:36.231462 | orchestrator | 2025-05-06 00:50:36.231475 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-06 00:50:36.231489 | orchestrator | Tuesday 06 May 2025 00:49:14 +0000 (0:00:00.448) 0:01:01.363 *********** 2025-05-06 00:50:36.231503 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:50:36.231517 | orchestrator | 2025-05-06 00:50:36.231538 | orchestrator | 
TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-06 00:50:36.231563 | orchestrator | Tuesday 06 May 2025 00:49:15 +0000 (0:00:00.617) 0:01:01.980 *********** 2025-05-06 00:50:36.231578 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:50:36.231592 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:50:36.231606 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:50:36.231620 | orchestrator | 2025-05-06 00:50:36.231633 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-06 00:50:36.231647 | orchestrator | Tuesday 06 May 2025 00:49:15 +0000 (0:00:00.801) 0:01:02.781 *********** 2025-05-06 00:50:36.231661 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:50:36.231674 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:50:36.231688 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:50:36.231702 | orchestrator | 2025-05-06 00:50:36.231716 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-06 00:50:36.231729 | orchestrator | Tuesday 06 May 2025 00:49:16 +0000 (0:00:00.242) 0:01:03.024 *********** 2025-05-06 00:50:36.231742 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:50:36.231756 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:50:36.231769 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:50:36.231783 | orchestrator | 2025-05-06 00:50:36.231797 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-06 00:50:36.231810 | orchestrator | Tuesday 06 May 2025 00:49:16 +0000 (0:00:00.439) 0:01:03.464 *********** 2025-05-06 00:50:36.231824 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:50:36.231838 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:50:36.231851 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:50:36.231865 | orchestrator | 2025-05-06 00:50:36.231879 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] 
******* 2025-05-06 00:50:36.231892 | orchestrator | Tuesday 06 May 2025 00:49:17 +0000 (0:00:00.452) 0:01:03.917 *********** 2025-05-06 00:50:36.231906 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:50:36.231919 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:50:36.231932 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:50:36.231959 | orchestrator | 2025-05-06 00:50:36.231973 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-06 00:50:36.231987 | orchestrator | Tuesday 06 May 2025 00:49:17 +0000 (0:00:00.418) 0:01:04.335 *********** 2025-05-06 00:50:36.232001 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232014 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232028 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232042 | orchestrator | 2025-05-06 00:50:36.232056 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-06 00:50:36.232069 | orchestrator | Tuesday 06 May 2025 00:49:18 +0000 (0:00:00.928) 0:01:05.264 *********** 2025-05-06 00:50:36.232083 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232097 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232111 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232124 | orchestrator | 2025-05-06 00:50:36.232138 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-06 00:50:36.232152 | orchestrator | Tuesday 06 May 2025 00:49:19 +0000 (0:00:00.835) 0:01:06.100 *********** 2025-05-06 00:50:36.232165 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232179 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232193 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232207 | orchestrator | 2025-05-06 00:50:36.232221 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-06 
00:50:36.232234 | orchestrator | Tuesday 06 May 2025 00:49:19 +0000 (0:00:00.522) 0:01:06.622 *********** 2025-05-06 00:50:36.232248 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232262 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232275 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232309 | orchestrator | 2025-05-06 00:50:36.232323 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-06 00:50:36.232338 | orchestrator | Tuesday 06 May 2025 00:49:20 +0000 (0:00:00.376) 0:01:06.998 *********** 2025-05-06 00:50:36.232351 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232365 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232379 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232393 | orchestrator | 2025-05-06 00:50:36.232407 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-06 00:50:36.232421 | orchestrator | Tuesday 06 May 2025 00:49:20 +0000 (0:00:00.452) 0:01:07.450 *********** 2025-05-06 00:50:36.232435 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232449 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232463 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232477 | orchestrator | 2025-05-06 00:50:36.232490 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-06 00:50:36.232504 | orchestrator | Tuesday 06 May 2025 00:49:20 +0000 (0:00:00.382) 0:01:07.833 *********** 2025-05-06 00:50:36.232518 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232532 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232546 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232560 | orchestrator | 2025-05-06 00:50:36.232574 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-06 
00:50:36.232588 | orchestrator | Tuesday 06 May 2025 00:49:21 +0000 (0:00:00.419) 0:01:08.252 *********** 2025-05-06 00:50:36.232602 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232616 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232630 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232644 | orchestrator | 2025-05-06 00:50:36.232658 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-06 00:50:36.232671 | orchestrator | Tuesday 06 May 2025 00:49:21 +0000 (0:00:00.280) 0:01:08.533 *********** 2025-05-06 00:50:36.232685 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232699 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232713 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232727 | orchestrator | 2025-05-06 00:50:36.232747 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-06 00:50:36.232762 | orchestrator | Tuesday 06 May 2025 00:49:22 +0000 (0:00:00.424) 0:01:08.957 *********** 2025-05-06 00:50:36.232776 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232790 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232804 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232818 | orchestrator | 2025-05-06 00:50:36.232838 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-06 00:50:36.232852 | orchestrator | Tuesday 06 May 2025 00:49:22 +0000 (0:00:00.518) 0:01:09.476 *********** 2025-05-06 00:50:36.232866 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232880 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232893 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232907 | orchestrator | 2025-05-06 00:50:36.232921 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-06 
00:50:36.232940 | orchestrator | Tuesday 06 May 2025 00:49:23 +0000 (0:00:00.405) 0:01:09.881 *********** 2025-05-06 00:50:36.232954 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.232968 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.232982 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.232996 | orchestrator | 2025-05-06 00:50:36.233009 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-06 00:50:36.233023 | orchestrator | Tuesday 06 May 2025 00:49:23 +0000 (0:00:00.311) 0:01:10.193 *********** 2025-05-06 00:50:36.233037 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:50:36.233051 | orchestrator | 2025-05-06 00:50:36.233065 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-06 00:50:36.233078 | orchestrator | Tuesday 06 May 2025 00:49:24 +0000 (0:00:00.789) 0:01:10.982 *********** 2025-05-06 00:50:36.233092 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:50:36.233106 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:50:36.233119 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:50:36.233133 | orchestrator | 2025-05-06 00:50:36.233147 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-06 00:50:36.233160 | orchestrator | Tuesday 06 May 2025 00:49:24 +0000 (0:00:00.552) 0:01:11.535 *********** 2025-05-06 00:50:36.233174 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:50:36.233188 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:50:36.233201 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:50:36.233215 | orchestrator | 2025-05-06 00:50:36.233230 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-06 00:50:36.233253 | orchestrator | Tuesday 06 May 2025 00:49:25 +0000 (0:00:00.558) 0:01:12.094 
*********** 2025-05-06 00:50:36.233277 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.233365 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.233391 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.233415 | orchestrator | 2025-05-06 00:50:36.233439 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-06 00:50:36.233462 | orchestrator | Tuesday 06 May 2025 00:49:25 +0000 (0:00:00.429) 0:01:12.523 *********** 2025-05-06 00:50:36.233480 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.233494 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.233508 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.233522 | orchestrator | 2025-05-06 00:50:36.233536 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-06 00:50:36.233550 | orchestrator | Tuesday 06 May 2025 00:49:26 +0000 (0:00:00.477) 0:01:13.001 *********** 2025-05-06 00:50:36.233563 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.233577 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.233591 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.233611 | orchestrator | 2025-05-06 00:50:36.233626 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-06 00:50:36.233649 | orchestrator | Tuesday 06 May 2025 00:49:26 +0000 (0:00:00.449) 0:01:13.450 *********** 2025-05-06 00:50:36.233663 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.233677 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.233691 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.233704 | orchestrator | 2025-05-06 00:50:36.233718 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-06 00:50:36.233732 | orchestrator | Tuesday 06 May 2025 00:49:27 +0000 (0:00:00.436) 
0:01:13.886 *********** 2025-05-06 00:50:36.233746 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.233759 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.233773 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.233787 | orchestrator | 2025-05-06 00:50:36.233801 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-06 00:50:36.233854 | orchestrator | Tuesday 06 May 2025 00:49:27 +0000 (0:00:00.418) 0:01:14.305 *********** 2025-05-06 00:50:36.233869 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:50:36.233882 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:50:36.233894 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:50:36.233906 | orchestrator | 2025-05-06 00:50:36.233923 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-06 00:50:36.233945 | orchestrator | Tuesday 06 May 2025 00:49:27 +0000 (0:00:00.398) 0:01:14.704 *********** 2025-05-06 00:50:36.233969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.233994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234188 | orchestrator | 2025-05-06 00:50:36.234201 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-06 00:50:36.234213 | orchestrator | Tuesday 06 May 2025 00:49:29 +0000 (0:00:01.459) 0:01:16.164 *********** 2025-05-06 00:50:36.234225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234342 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 00:50:36.234380 | orchestrator | 2025-05-06 00:50:36.234392 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-06 00:50:36.234405 | orchestrator | Tuesday 06 May 2025 00:49:33 +0000 (0:00:04.406) 0:01:20.570 *********** 2025-05-06 00:50:36.234417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
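As an aside on the "Configure OVN in OVSDB" task earlier in this log: the `ovn-remote` value it applies is simply the three controller IPs joined as TCP endpoints on the southbound DB port 6642, alongside the per-node geneve tunnel settings. A minimal sketch of how those `external_ids` pairs fit together (the helper name is hypothetical; the values mirror the log output):

```python
# Sketch: assemble the OVN external_ids applied per chassis in the
# "Configure OVN in OVSDB" task above. Helper name is hypothetical;
# values mirror the log (geneve encapsulation, SB DB on port 6642).
OVN_SB_PORT = 6642

def ovn_external_ids(node_ip: str, controller_ips: list[str]) -> dict:
    """Return the external_ids key/value pairs a chassis would set."""
    remote = ",".join(f"tcp:{ip}:{OVN_SB_PORT}" for ip in controller_ips)
    return {
        "ovn-encap-ip": node_ip,      # this node's tunnel endpoint
        "ovn-encap-type": "geneve",   # overlay encapsulation
        "ovn-remote": remote,         # all three SB DB endpoints
        "ovn-remote-probe-interval": "60000",
        "ovn-openflow-probe-interval": "60",
    }

ids = ovn_external_ids(
    "192.168.16.10",
    ["192.168.16.10", "192.168.16.11", "192.168.16.12"],
)
print(ids["ovn-remote"])
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

Listing all three endpoints lets each ovn-controller fail over between southbound DB replicas without reconfiguration.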
2025-05-06 00:50:36.234432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.234445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.234473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.234498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.234514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.234533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.234546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.234564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.234577 | orchestrator |
2025-05-06 00:50:36.234590 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-06 00:50:36.234602 | orchestrator | Tuesday 06 May 2025 00:49:36 +0000 (0:00:02.394) 0:01:22.965 ***********
2025-05-06 00:50:36.234615 | orchestrator |
2025-05-06 00:50:36.234627 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-06 00:50:36.234639 | orchestrator | Tuesday 06 May 2025 00:49:36 +0000 (0:00:00.057) 0:01:23.023 ***********
2025-05-06 00:50:36.234656 | orchestrator |
2025-05-06 00:50:36.234668 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-06 00:50:36.234681 | orchestrator | Tuesday 06 May 2025 00:49:36 +0000 (0:00:00.053) 0:01:23.076 ***********
2025-05-06 00:50:36.234693 | orchestrator |
2025-05-06 00:50:36.234705 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-06 00:50:36.234725 | orchestrator | Tuesday 06 May 2025 00:49:36 +0000 (0:00:00.197) 0:01:23.274 ***********
2025-05-06 00:50:36.234738 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:50:36.234750 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:50:36.234763 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:50:36.234775 | orchestrator |
2025-05-06 00:50:36.234787 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-06 00:50:36.234799 | orchestrator | Tuesday 06 May 2025 00:49:38 +0000 (0:00:02.584) 0:01:25.858 ***********
2025-05-06 00:50:36.234812 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:50:36.234824 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:50:36.234836 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:50:36.234848 | orchestrator |
2025-05-06 00:50:36.234860 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-06 00:50:36.234872 | orchestrator | Tuesday 06 May 2025 00:49:46 +0000 (0:00:07.746) 0:01:33.605 ***********
2025-05-06 00:50:36.234884 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:50:36.234896 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:50:36.234908 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:50:36.234921 | orchestrator |
2025-05-06 00:50:36.234933 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-06 00:50:36.234945 | orchestrator | Tuesday 06 May 2025 00:49:54 +0000 (0:00:07.915) 0:01:41.521 ***********
2025-05-06 00:50:36.234957 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:50:36.234969 | orchestrator |
2025-05-06 00:50:36.234982 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-06 00:50:36.234994 | orchestrator | Tuesday 06 May 2025 00:49:54 +0000 (0:00:00.112) 0:01:41.633 ***********
2025-05-06 00:50:36.235012 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:50:36.235025 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:50:36.235037 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:50:36.235050 | orchestrator |
2025-05-06 00:50:36.235068 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-05-06 00:50:36.235080 | orchestrator | Tuesday 06 May 2025 00:49:55 +0000 (0:00:01.207) 0:01:42.841 ***********
2025-05-06 00:50:36.235093 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:50:36.235105 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:50:36.235117 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:50:36.235129 | orchestrator |
2025-05-06 00:50:36.235141 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-06 00:50:36.235153 | orchestrator | Tuesday 06 May 2025 00:49:56 +0000 (0:00:00.675) 0:01:43.516 ***********
2025-05-06 00:50:36.235165 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:50:36.235177 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:50:36.235189 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:50:36.235201 | orchestrator |
2025-05-06 00:50:36.235214 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-06 00:50:36.235226 | orchestrator | Tuesday 06 May 2025 00:49:57 +0000 (0:00:00.850) 0:01:44.366 ***********
2025-05-06 00:50:36.235238 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:50:36.235250 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:50:36.235262 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:50:36.235274 | orchestrator |
2025-05-06 00:50:36.235306 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-06 00:50:36.235319 | orchestrator | Tuesday 06 May 2025 00:49:58 +0000 (0:00:00.690) 0:01:45.056 ***********
2025-05-06 00:50:36.235331 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:50:36.235344 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:50:36.235356 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:50:36.235368 | orchestrator |
2025-05-06 00:50:36.235380 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-06 00:50:36.235392 | orchestrator | Tuesday 06 May 2025 00:49:59 +0000 (0:00:01.251) 0:01:46.308 ***********
2025-05-06 00:50:36.235404 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:50:36.235416 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:50:36.235428 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:50:36.235440 | orchestrator |
2025-05-06 00:50:36.235452 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-05-06 00:50:36.235465 | orchestrator | Tuesday 06 May 2025 00:50:00 +0000 (0:00:00.724) 0:01:47.033 ***********
2025-05-06 00:50:36.235477 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:50:36.235489 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:50:36.235501 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:50:36.235513 | orchestrator |
2025-05-06 00:50:36.235525 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-05-06 00:50:36.235537 | orchestrator | Tuesday 06 May 2025 00:50:00 +0000 (0:00:00.415) 0:01:47.448 ***********
2025-05-06 00:50:36.235550 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235563 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235576 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235608 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235620 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235639 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235652 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235664 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235677 | orchestrator |
2025-05-06 00:50:36.235689 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-05-06 00:50:36.235702 | orchestrator | Tuesday 06 May 2025 00:50:02 +0000 (0:00:01.648) 0:01:49.097 ***********
2025-05-06 00:50:36.235714 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235727 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235739 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235762 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235834 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235877 | orchestrator |
2025-05-06 00:50:36.235891 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-05-06 00:50:36.235913 | orchestrator | Tuesday 06 May 2025 00:50:06 +0000 (0:00:04.333) 0:01:53.430 ***********
2025-05-06 00:50:36.235935 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235957 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235978 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.235991 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.236009 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.236022 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.236038 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.236058 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.236072 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:50:36.236084 | orchestrator |
2025-05-06 00:50:36.236097 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-06 00:50:36.236109 | orchestrator | Tuesday 06 May 2025 00:50:09 +0000 (0:00:03.216) 0:01:56.647 ***********
2025-05-06 00:50:36.236121 | orchestrator |
2025-05-06 00:50:36.236134 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-06 00:50:36.236146 | orchestrator | Tuesday 06 May 2025 00:50:09 +0000 (0:00:00.203) 0:01:56.851 ***********
2025-05-06 00:50:36.236159 | orchestrator |
2025-05-06 00:50:36.236171 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-06 00:50:36.236183 | orchestrator | Tuesday 06 May 2025 00:50:10 +0000 (0:00:00.068) 0:01:56.919 ***********
2025-05-06 00:50:36.236196 | orchestrator |
2025-05-06 00:50:36.236208 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-06 00:50:36.236221 | orchestrator | Tuesday 06 May 2025 00:50:10 +0000 (0:00:00.110) 0:01:57.029 ***********
2025-05-06 00:50:36.236233 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:50:36.236250 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:50:36.236263 | orchestrator |
2025-05-06 00:50:36.236275 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-06 00:50:36.236306 | orchestrator | Tuesday 06 May 2025 00:50:16 +0000 (0:00:06.840) 0:02:03.870 ***********
2025-05-06 00:50:36.236319 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:50:36.236332 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:50:36.236344 | orchestrator |
2025-05-06 00:50:36.236357 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-06 00:50:36.236369 | orchestrator | Tuesday 06 May 2025 00:50:23 +0000 (0:00:06.568) 0:02:10.439 ***********
2025-05-06 00:50:36.236381 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:50:36.236393 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:50:36.236405 | orchestrator |
2025-05-06 00:50:36.236418 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-06 00:50:36.236430 | orchestrator | Tuesday 06 May 2025 00:50:29 +0000 (0:00:06.248) 0:02:16.687 ***********
2025-05-06 00:50:36.236442 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:50:36.236454 | orchestrator |
2025-05-06 00:50:36.236466 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-06 00:50:36.236478 | orchestrator | Tuesday 06 May 2025 00:50:30 +0000 (0:00:00.308) 0:02:16.995 ***********
2025-05-06 00:50:36.236490 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:50:36.236503 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:50:36.236515 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:50:36.236527 | orchestrator |
2025-05-06 00:50:36.236539 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-05-06 00:50:36.236551 | orchestrator | Tuesday 06 May 2025 00:50:30 +0000 (0:00:00.815) 0:02:17.810 ***********
2025-05-06 00:50:36.236563 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:50:36.236576 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:50:36.236596 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:50:36.236609 | orchestrator |
2025-05-06 00:50:36.236622 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-06 00:50:36.236634 | orchestrator | Tuesday 06 May 2025 00:50:31 +0000 (0:00:00.673) 0:02:18.484 ***********
2025-05-06 00:50:36.236647 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:50:36.236659 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:50:36.236671 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:50:36.236683 | orchestrator |
2025-05-06 00:50:36.236696 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-06 00:50:36.236708 | orchestrator | Tuesday 06 May 2025 00:50:32 +0000 (0:00:01.009) 0:02:19.494 ***********
2025-05-06 00:50:36.236720 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:50:36.236733 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:50:36.236745 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:50:36.236757 | orchestrator |
2025-05-06 00:50:36.236769 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-06 00:50:36.236794 | orchestrator | Tuesday 06 May 2025 00:50:33 +0000 (0:00:00.757) 0:02:20.251 ***********
2025-05-06 00:50:36.236807 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:50:36.236819 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:50:36.236831 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:50:36.236844 | orchestrator |
2025-05-06 00:50:36.236856 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-06 00:50:36.236868 | orchestrator | Tuesday 06 May 2025 00:50:34 +0000 (0:00:00.776) 0:02:21.028 ***********
2025-05-06 00:50:36.236880 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:50:36.236892 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:50:36.236904 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:50:36.236916 | orchestrator |
2025-05-06 00:50:36.236928 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:50:36.236941 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-05-06 00:50:36.236960 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-05-06 00:50:36.236992 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-05-06 00:50:39.270844 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:50:39.271007 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:50:39.271030 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 00:50:39.271046 | orchestrator |
2025-05-06 00:50:39.271060 | orchestrator |
2025-05-06 00:50:39.271075 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 00:50:39.271091 | orchestrator | Tuesday 06 May 2025 00:50:35 +0000 (0:00:01.416) 0:02:22.444 ***********
2025-05-06 00:50:39.271105 | orchestrator | ===============================================================================
2025-05-06 00:50:39.271119 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.78s
2025-05-06 00:50:39.271133 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.83s
2025-05-06 00:50:39.271146 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.32s
2025-05-06 00:50:39.271160 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.16s
2025-05-06 00:50:39.271174 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.43s
2025-05-06 00:50:39.271188 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.41s
2025-05-06 00:50:39.271211 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.33s
2025-05-06 00:50:39.271226 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.22s
2025-05-06 00:50:39.271240 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.73s
2025-05-06 00:50:39.271253 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.50s
2025-05-06 00:50:39.271267 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.39s
2025-05-06 00:50:39.271317 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.27s
2025-05-06 00:50:39.271332 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.82s
2025-05-06 00:50:39.271347 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.65s
2025-05-06 00:50:39.271364 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.57s
2025-05-06 00:50:39.271379 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s
2025-05-06 00:50:39.271395 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.42s
2025-05-06 00:50:39.271411 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.28s
2025-05-06 00:50:39.271427 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.25s
2025-05-06 00:50:39.271443 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.21s
2025-05-06 00:50:39.271476 | orchestrator | 2025-05-06 00:50:39 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:50:39.272218 | orchestrator | 2025-05-06
00:50:39 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:50:39.274758 | orchestrator | 2025-05-06 00:50:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:50:39.275204 | orchestrator | 2025-05-06 00:50:39 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:50:42.314816 | orchestrator | 2025-05-06 00:50:42 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:50:42.317261 | orchestrator | 2025-05-06 00:50:42 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:50:42.317951 | orchestrator | 2025-05-06 00:50:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:50:45.375388 | orchestrator | 2025-05-06 00:50:42 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:50:45.375533 | orchestrator | 2025-05-06 00:50:45 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:50:45.375741 | orchestrator | 2025-05-06 00:50:45 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:50:45.376255 | orchestrator | 2025-05-06 00:50:45 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:50:48.418206 | orchestrator | 2025-05-06 00:50:45 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:50:48.418383 | orchestrator | 2025-05-06 00:50:48 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:50:48.419105 | orchestrator | 2025-05-06 00:50:48 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:50:48.419142 | orchestrator | 2025-05-06 00:50:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:50:48.419592 | orchestrator | 2025-05-06 00:50:48 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:50:51.476834 | orchestrator | 2025-05-06 00:50:51 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:50:51.478189 | orchestrator | 2025-05-06 00:50:51 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:50:51.480043 | orchestrator | 2025-05-06 00:50:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:50:54.539583 | orchestrator | 2025-05-06 00:50:51 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:50:54.539723 | orchestrator | 2025-05-06 00:50:54 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:50:54.541307 | orchestrator | 2025-05-06 00:50:54 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:50:54.544359 | orchestrator | 2025-05-06 00:50:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:50:57.600733 | orchestrator | 2025-05-06 00:50:54 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:50:57.600881 | orchestrator | 2025-05-06 00:50:57 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:50:57.602435 | orchestrator | 2025-05-06 00:50:57 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:50:57.604427 | orchestrator | 2025-05-06 00:50:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:51:00.657848 | orchestrator | 2025-05-06 00:50:57 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:51:00.658001 | orchestrator | 2025-05-06 00:51:00 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:51:00.658922 | orchestrator | 2025-05-06 00:51:00 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:51:00.660335 | orchestrator | 2025-05-06 00:51:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:51:03.711328 | orchestrator | 2025-05-06 00:51:00 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:51:03.711478 | orchestrator | 2025-05-06 00:51:03 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:51:03.713066 | orchestrator | 2025-05-06 00:51:03 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:51:03.715059 | orchestrator | 2025-05-06 00:51:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:51:06.768500 | orchestrator | 2025-05-06 00:51:03 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:51:06.768749 | orchestrator | 2025-05-06 00:51:06 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:51:06.768855 | orchestrator | 2025-05-06 00:51:06 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:51:06.769679 | orchestrator | 2025-05-06 00:51:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:51:09.816620 | orchestrator | 2025-05-06 00:51:06 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:51:09.816764 | orchestrator | 2025-05-06 00:51:09 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:51:09.818483 | orchestrator | 2025-05-06 00:51:09 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:51:09.820761 | orchestrator | 2025-05-06 00:51:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:51:12.862697 | orchestrator | 2025-05-06 00:51:09 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:51:12.862789 | orchestrator | 2025-05-06 00:51:12 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:51:12.864509 | orchestrator | 2025-05-06 00:51:12 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:51:12.866514 | orchestrator | 2025-05-06 00:51:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:51:12.866813 | orchestrator | 2025-05-06 00:51:12 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:51:15.918715 | orchestrator | 2025-05-06 00:51:15 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:51:15.920851 | orchestrator | 2025-05-06 00:51:15 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:51:15.923728 | orchestrator | 2025-05-06 00:51:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:51:15.924325 | orchestrator | 2025-05-06 00:51:15 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:51:18.994815 | orchestrator | 2025-05-06 00:51:18 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:51:18.997615 | orchestrator | 2025-05-06 00:51:18 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:51:18.999736 | orchestrator | 2025-05-06 00:51:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:51:22.063417 | orchestrator | 2025-05-06 00:51:18 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:51:22.063523 | orchestrator | 2025-05-06 00:51:22 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:51:22.065518 | orchestrator | 2025-05-06 00:51:22 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:51:22.066265 | orchestrator | 2025-05-06 00:51:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:51:22.066384 | orchestrator | 2025-05-06 00:51:22 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:51:25.118671 | orchestrator | 2025-05-06 00:51:25 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:51:25.120579 | orchestrator | 2025-05-06 00:51:25 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:51:25.123172 | orchestrator | 2025-05-06 00:51:25 |
INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:28.174669 | orchestrator | 2025-05-06 00:51:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:28.174816 | orchestrator | 2025-05-06 00:51:28 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:28.174898 | orchestrator | 2025-05-06 00:51:28 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:51:28.174924 | orchestrator | 2025-05-06 00:51:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:31.208500 | orchestrator | 2025-05-06 00:51:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:31.208620 | orchestrator | 2025-05-06 00:51:31 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:31.209173 | orchestrator | 2025-05-06 00:51:31 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:51:31.209215 | orchestrator | 2025-05-06 00:51:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:31.209403 | orchestrator | 2025-05-06 00:51:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:34.245954 | orchestrator | 2025-05-06 00:51:34 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:34.246229 | orchestrator | 2025-05-06 00:51:34 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:51:34.246570 | orchestrator | 2025-05-06 00:51:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:37.284908 | orchestrator | 2025-05-06 00:51:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:37.285029 | orchestrator | 2025-05-06 00:51:37 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:37.287410 | orchestrator | 2025-05-06 00:51:37 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in 
state STARTED 2025-05-06 00:51:40.313552 | orchestrator | 2025-05-06 00:51:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:40.313686 | orchestrator | 2025-05-06 00:51:37 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:40.313724 | orchestrator | 2025-05-06 00:51:40 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:40.314138 | orchestrator | 2025-05-06 00:51:40 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:51:40.314201 | orchestrator | 2025-05-06 00:51:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:43.361395 | orchestrator | 2025-05-06 00:51:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:43.361530 | orchestrator | 2025-05-06 00:51:43 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:43.362278 | orchestrator | 2025-05-06 00:51:43 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:51:43.364803 | orchestrator | 2025-05-06 00:51:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:43.365009 | orchestrator | 2025-05-06 00:51:43 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:46.399726 | orchestrator | 2025-05-06 00:51:46 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:49.446897 | orchestrator | 2025-05-06 00:51:46 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:51:49.447059 | orchestrator | 2025-05-06 00:51:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:49.447080 | orchestrator | 2025-05-06 00:51:46 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:49.447116 | orchestrator | 2025-05-06 00:51:49 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:49.452280 | orchestrator 
| 2025-05-06 00:51:49 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:51:49.452328 | orchestrator | 2025-05-06 00:51:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:52.488414 | orchestrator | 2025-05-06 00:51:49 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:52.488538 | orchestrator | 2025-05-06 00:51:52 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:52.488690 | orchestrator | 2025-05-06 00:51:52 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:51:52.493343 | orchestrator | 2025-05-06 00:51:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:55.532115 | orchestrator | 2025-05-06 00:51:52 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:55.532259 | orchestrator | 2025-05-06 00:51:55 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:55.533203 | orchestrator | 2025-05-06 00:51:55 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:51:55.534812 | orchestrator | 2025-05-06 00:51:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:58.566283 | orchestrator | 2025-05-06 00:51:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:51:58.566427 | orchestrator | 2025-05-06 00:51:58 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:51:58.570360 | orchestrator | 2025-05-06 00:51:58 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:51:58.571274 | orchestrator | 2025-05-06 00:51:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:51:58.571613 | orchestrator | 2025-05-06 00:51:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:01.619869 | orchestrator | 2025-05-06 00:52:01 | INFO  | Task 
fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:01.620175 | orchestrator | 2025-05-06 00:52:01 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:01.621117 | orchestrator | 2025-05-06 00:52:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:04.673318 | orchestrator | 2025-05-06 00:52:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:04.673462 | orchestrator | 2025-05-06 00:52:04 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:04.674520 | orchestrator | 2025-05-06 00:52:04 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:04.676998 | orchestrator | 2025-05-06 00:52:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:04.677354 | orchestrator | 2025-05-06 00:52:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:07.728252 | orchestrator | 2025-05-06 00:52:07 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:07.729086 | orchestrator | 2025-05-06 00:52:07 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:07.730288 | orchestrator | 2025-05-06 00:52:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:10.788183 | orchestrator | 2025-05-06 00:52:07 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:10.788366 | orchestrator | 2025-05-06 00:52:10 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:10.791209 | orchestrator | 2025-05-06 00:52:10 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:10.794086 | orchestrator | 2025-05-06 00:52:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:10.794581 | orchestrator | 2025-05-06 00:52:10 | INFO  | Wait 1 second(s) until the next 
check 2025-05-06 00:52:13.845430 | orchestrator | 2025-05-06 00:52:13 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:13.846969 | orchestrator | 2025-05-06 00:52:13 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:13.850119 | orchestrator | 2025-05-06 00:52:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:16.897990 | orchestrator | 2025-05-06 00:52:13 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:16.898198 | orchestrator | 2025-05-06 00:52:16 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:16.899866 | orchestrator | 2025-05-06 00:52:16 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:16.901512 | orchestrator | 2025-05-06 00:52:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:16.901802 | orchestrator | 2025-05-06 00:52:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:19.957984 | orchestrator | 2025-05-06 00:52:19 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:19.959399 | orchestrator | 2025-05-06 00:52:19 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:19.962319 | orchestrator | 2025-05-06 00:52:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:19.962704 | orchestrator | 2025-05-06 00:52:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:23.016447 | orchestrator | 2025-05-06 00:52:23 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:23.016849 | orchestrator | 2025-05-06 00:52:23 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:23.018562 | orchestrator | 2025-05-06 00:52:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 
00:52:26.081061 | orchestrator | 2025-05-06 00:52:23 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:26.081265 | orchestrator | 2025-05-06 00:52:26 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:26.082743 | orchestrator | 2025-05-06 00:52:26 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:26.084737 | orchestrator | 2025-05-06 00:52:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:29.143362 | orchestrator | 2025-05-06 00:52:26 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:29.143506 | orchestrator | 2025-05-06 00:52:29 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:29.143692 | orchestrator | 2025-05-06 00:52:29 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:29.143722 | orchestrator | 2025-05-06 00:52:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:32.196962 | orchestrator | 2025-05-06 00:52:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:32.197092 | orchestrator | 2025-05-06 00:52:32 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:32.206587 | orchestrator | 2025-05-06 00:52:32 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:32.207193 | orchestrator | 2025-05-06 00:52:32 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:32.208176 | orchestrator | 2025-05-06 00:52:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:35.283816 | orchestrator | 2025-05-06 00:52:35 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:35.284292 | orchestrator | 2025-05-06 00:52:35 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:35.284368 | orchestrator | 2025-05-06 00:52:35 | 
INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:38.345810 | orchestrator | 2025-05-06 00:52:35 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:38.345972 | orchestrator | 2025-05-06 00:52:38 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:38.346187 | orchestrator | 2025-05-06 00:52:38 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:38.347892 | orchestrator | 2025-05-06 00:52:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:38.348231 | orchestrator | 2025-05-06 00:52:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:41.412631 | orchestrator | 2025-05-06 00:52:41 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:41.413977 | orchestrator | 2025-05-06 00:52:41 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:41.416951 | orchestrator | 2025-05-06 00:52:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:44.471650 | orchestrator | 2025-05-06 00:52:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:44.471797 | orchestrator | 2025-05-06 00:52:44 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:44.474006 | orchestrator | 2025-05-06 00:52:44 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:44.475164 | orchestrator | 2025-05-06 00:52:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:44.475255 | orchestrator | 2025-05-06 00:52:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:47.520166 | orchestrator | 2025-05-06 00:52:47 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:47.520570 | orchestrator | 2025-05-06 00:52:47 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in 
state STARTED 2025-05-06 00:52:47.521453 | orchestrator | 2025-05-06 00:52:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:50.577634 | orchestrator | 2025-05-06 00:52:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:50.577806 | orchestrator | 2025-05-06 00:52:50 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:50.581508 | orchestrator | 2025-05-06 00:52:50 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:50.583540 | orchestrator | 2025-05-06 00:52:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:50.583710 | orchestrator | 2025-05-06 00:52:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:53.650306 | orchestrator | 2025-05-06 00:52:53 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:53.650583 | orchestrator | 2025-05-06 00:52:53 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:53.650624 | orchestrator | 2025-05-06 00:52:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:53.650686 | orchestrator | 2025-05-06 00:52:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:56.688325 | orchestrator | 2025-05-06 00:52:56 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:56.688532 | orchestrator | 2025-05-06 00:52:56 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:56.689640 | orchestrator | 2025-05-06 00:52:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:52:59.735109 | orchestrator | 2025-05-06 00:52:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:52:59.735266 | orchestrator | 2025-05-06 00:52:59 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:52:59.735852 | orchestrator 
| 2025-05-06 00:52:59 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:52:59.736927 | orchestrator | 2025-05-06 00:52:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:02.788665 | orchestrator | 2025-05-06 00:52:59 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:02.788814 | orchestrator | 2025-05-06 00:53:02 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:02.792498 | orchestrator | 2025-05-06 00:53:02 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:02.794164 | orchestrator | 2025-05-06 00:53:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:05.841727 | orchestrator | 2025-05-06 00:53:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:05.841877 | orchestrator | 2025-05-06 00:53:05 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:05.842536 | orchestrator | 2025-05-06 00:53:05 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:05.844929 | orchestrator | 2025-05-06 00:53:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:05.845162 | orchestrator | 2025-05-06 00:53:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:08.900746 | orchestrator | 2025-05-06 00:53:08 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:08.902540 | orchestrator | 2025-05-06 00:53:08 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:08.908073 | orchestrator | 2025-05-06 00:53:08 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:11.961731 | orchestrator | 2025-05-06 00:53:08 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:11.961882 | orchestrator | 2025-05-06 00:53:11 | INFO  | Task 
fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:11.964254 | orchestrator | 2025-05-06 00:53:11 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:11.965718 | orchestrator | 2025-05-06 00:53:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:11.965969 | orchestrator | 2025-05-06 00:53:11 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:15.015119 | orchestrator | 2025-05-06 00:53:15 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:15.017652 | orchestrator | 2025-05-06 00:53:15 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:18.059616 | orchestrator | 2025-05-06 00:53:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:18.059741 | orchestrator | 2025-05-06 00:53:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:18.059780 | orchestrator | 2025-05-06 00:53:18 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:18.060569 | orchestrator | 2025-05-06 00:53:18 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:18.061379 | orchestrator | 2025-05-06 00:53:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:21.101448 | orchestrator | 2025-05-06 00:53:18 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:21.101551 | orchestrator | 2025-05-06 00:53:21 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:21.103201 | orchestrator | 2025-05-06 00:53:21 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:21.104821 | orchestrator | 2025-05-06 00:53:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:24.163348 | orchestrator | 2025-05-06 00:53:21 | INFO  | Wait 1 second(s) until the next 
check 2025-05-06 00:53:24.163491 | orchestrator | 2025-05-06 00:53:24 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:24.164901 | orchestrator | 2025-05-06 00:53:24 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:24.166752 | orchestrator | 2025-05-06 00:53:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:24.167051 | orchestrator | 2025-05-06 00:53:24 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:27.221386 | orchestrator | 2025-05-06 00:53:27 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:27.222667 | orchestrator | 2025-05-06 00:53:27 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:27.222717 | orchestrator | 2025-05-06 00:53:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:30.266860 | orchestrator | 2025-05-06 00:53:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:30.267143 | orchestrator | 2025-05-06 00:53:30 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:30.267413 | orchestrator | 2025-05-06 00:53:30 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:30.268947 | orchestrator | 2025-05-06 00:53:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:30.269141 | orchestrator | 2025-05-06 00:53:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:33.313464 | orchestrator | 2025-05-06 00:53:33 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:33.315306 | orchestrator | 2025-05-06 00:53:33 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:33.316748 | orchestrator | 2025-05-06 00:53:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 
00:53:36.364847 | orchestrator | 2025-05-06 00:53:33 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:36.365036 | orchestrator | 2025-05-06 00:53:36 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:36.366863 | orchestrator | 2025-05-06 00:53:36 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:36.369324 | orchestrator | 2025-05-06 00:53:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:36.369890 | orchestrator | 2025-05-06 00:53:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:39.421836 | orchestrator | 2025-05-06 00:53:39 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:39.422270 | orchestrator | 2025-05-06 00:53:39 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:39.424778 | orchestrator | 2025-05-06 00:53:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:42.467174 | orchestrator | 2025-05-06 00:53:39 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:42.467328 | orchestrator | 2025-05-06 00:53:42 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:42.467468 | orchestrator | 2025-05-06 00:53:42 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:42.469381 | orchestrator | 2025-05-06 00:53:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:42.469467 | orchestrator | 2025-05-06 00:53:42 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:45.517758 | orchestrator | 2025-05-06 00:53:45 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED 2025-05-06 00:53:45.518208 | orchestrator | 2025-05-06 00:53:45 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:45.519140 | orchestrator | 2025-05-06 00:53:45 | 
INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:53:45.519253 | orchestrator | 2025-05-06 00:53:45 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:53:48.574332 | orchestrator | 2025-05-06 00:53:48 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:53:48.576113 | orchestrator | 2025-05-06 00:53:48 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:53:48.579254 | orchestrator | 2025-05-06 00:53:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:53:51.627510 | orchestrator | 2025-05-06 00:53:48 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:53:51.627683 | orchestrator | 2025-05-06 00:53:51 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state STARTED
2025-05-06 00:53:51.627861 | orchestrator | 2025-05-06 00:53:51 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED
2025-05-06 00:53:51.628825 | orchestrator | 2025-05-06 00:53:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:53:51.629004 | orchestrator | 2025-05-06 00:53:51 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:53:54.688679 | orchestrator | 2025-05-06 00:53:54 | INFO  | Task fa78f235-4ac5-47b6-b4a7-4c0a2ed8e33d is in state SUCCESS
2025-05-06 00:53:54.690306 | orchestrator |
2025-05-06 00:53:54.690362 | orchestrator |
2025-05-06 00:53:54.690392 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 00:53:54.690409 | orchestrator |
2025-05-06 00:53:54.690423 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 00:53:54.690438 | orchestrator | Tuesday 06 May 2025 00:46:58 +0000 (0:00:00.480) 0:00:00.480 ***********
2025-05-06 00:53:54.690452 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:53:54.690615 | orchestrator | ok: [testbed-node-1]
2025-05-06
00:53:54.690725 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:53:54.690740 | orchestrator |
2025-05-06 00:53:54.690786 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 00:53:54.690801 | orchestrator | Tuesday 06 May 2025 00:46:59 +0000 (0:00:00.687) 0:00:01.168 ***********
2025-05-06 00:53:54.690816 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-05-06 00:53:54.690830 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-05-06 00:53:54.690845 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-05-06 00:53:54.690858 | orchestrator |
2025-05-06 00:53:54.690872 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-06 00:53:54.690886 | orchestrator |
2025-05-06 00:53:54.690900 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-06 00:53:54.690914 | orchestrator | Tuesday 06 May 2025 00:46:59 +0000 (0:00:00.350) 0:00:01.519 ***********
2025-05-06 00:53:54.691000 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:53:54.691015 | orchestrator |
2025-05-06 00:53:54.691030 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-06 00:53:54.691044 | orchestrator | Tuesday 06 May 2025 00:47:00 +0000 (0:00:00.838) 0:00:02.357 ***********
2025-05-06 00:53:54.691058 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:53:54.691072 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:53:54.691086 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:53:54.691100 | orchestrator |
2025-05-06 00:53:54.691114 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-06 00:53:54.691128 | orchestrator | Tuesday 06 May 2025 00:47:01 +0000 (0:00:00.715) 0:00:03.073 ***********
2025-05-06 00:53:54.691141 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:53:54.691155 | orchestrator |
2025-05-06 00:53:54.691180 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-05-06 00:53:54.691195 | orchestrator | Tuesday 06 May 2025 00:47:01 +0000 (0:00:00.695) 0:00:03.768 ***********
2025-05-06 00:53:54.691209 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:53:54.691223 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:53:54.691237 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:53:54.691642 | orchestrator |
2025-05-06 00:53:54.691662 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-05-06 00:53:54.691675 | orchestrator | Tuesday 06 May 2025 00:47:02 +0000 (0:00:00.857) 0:00:04.626 ***********
2025-05-06 00:53:54.691687 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-06 00:53:54.691700 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-06 00:53:54.691713 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-06 00:53:54.691739 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-06 00:53:54.691752 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-06 00:53:54.691764 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-06 00:53:54.691776 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-06 00:53:54.691789 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-06 00:53:54.691801 | orchestrator | ok:
[testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-06 00:53:54.691814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-06 00:53:54.691826 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-06 00:53:54.691838 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-06 00:53:54.691850 | orchestrator | 2025-05-06 00:53:54.691863 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-06 00:53:54.691886 | orchestrator | Tuesday 06 May 2025 00:47:06 +0000 (0:00:03.860) 0:00:08.486 *********** 2025-05-06 00:53:54.691899 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-06 00:53:54.691933 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-06 00:53:54.691947 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-06 00:53:54.691960 | orchestrator | 2025-05-06 00:53:54.691973 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-06 00:53:54.691985 | orchestrator | Tuesday 06 May 2025 00:47:08 +0000 (0:00:01.410) 0:00:09.896 *********** 2025-05-06 00:53:54.691998 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-06 00:53:54.692010 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-06 00:53:54.692054 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-06 00:53:54.692378 | orchestrator | 2025-05-06 00:53:54.692396 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-06 00:53:54.692441 | orchestrator | Tuesday 06 May 2025 00:47:10 +0000 (0:00:02.071) 0:00:11.968 *********** 2025-05-06 00:53:54.692455 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-06 00:53:54.692468 | orchestrator | skipping: 
[testbed-node-0] 2025-05-06 00:53:54.692494 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-06 00:53:54.692508 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.692547 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-06 00:53:54.692563 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.692575 | orchestrator | 2025-05-06 00:53:54.692587 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-06 00:53:54.692600 | orchestrator | Tuesday 06 May 2025 00:47:11 +0000 (0:00:00.966) 0:00:12.935 *********** 2025-05-06 00:53:54.692614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-06 00:53:54.692633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-06 
00:53:54.692646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-06 00:53:54.692659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-06 00:53:54.692684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-06 00:53:54.692704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-06 00:53:54.692718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-06 00:53:54.692732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-06 00:53:54.692746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-06 00:53:54.692759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-06 00:53:54.692781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-06 00:53:54.692794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-06 00:53:54.692807 | orchestrator | 2025-05-06 00:53:54.692820 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-06 00:53:54.692833 | orchestrator | Tuesday 06 May 2025 00:47:13 +0000 (0:00:02.307) 0:00:15.242 *********** 2025-05-06 00:53:54.692845 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.692868 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.692882 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.692894 | orchestrator | 2025-05-06 00:53:54.692911 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-06 00:53:54.693114 | orchestrator | Tuesday 06 May 2025 00:47:14 +0000 (0:00:01.357) 0:00:16.600 *********** 2025-05-06 00:53:54.693256 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-06 00:53:54.693272 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-06 00:53:54.693284 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-06 00:53:54.693296 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-06 00:53:54.693309 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-06 00:53:54.693321 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-06 00:53:54.693334 | orchestrator | 
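The sysctl and module-load tasks above amount to a handful of host settings: two `ip_nonlocal_bind` knobs so haproxy/keepalived can bind the floating VIP, a datagram queue bump, and the `ip_vs` module that keepalived's IPVS support needs. A minimal shell sketch of the same effect, done by hand (files are written to the working directory here; on a real node they would live under `/etc/sysctl.d/` and `/etc/modules-load.d/`, and the file names are assumptions, not what kolla-ansible uses):

```shell
# Sketch only: the host-level effect of the "Setting sysctl values"
# and "module-load" tasks in the log, with values taken from the log.
# (The KOLLA_UNSET entry for net.ipv4.tcp_retries2 is deliberately
# omitted -- the task left it untouched.)
cat > 98-kolla-loadbalancer.conf <<'EOF'
net.ipv6.ip_nonlocal_bind = 1
net.ipv4.ip_nonlocal_bind = 1
net.unix.max_dgram_qlen = 128
EOF

# Persist the ip_vs module the way modules-load.d does.
echo ip_vs > ip_vs.conf

# On a real host (requires root):
#   install -m 0644 98-kolla-loadbalancer.conf /etc/sysctl.d/
#   sysctl -p /etc/sysctl.d/98-kolla-loadbalancer.conf
#   install -m 0644 ip_vs.conf /etc/modules-load.d/
#   modprobe ip_vs
```

Nothing here is kolla-specific; the role simply templates these files and reloads, which is why the tasks report `changed` on first run and `ok` afterwards.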
2025-05-06 00:53:54.693346 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-05-06 00:53:54.693359 | orchestrator | Tuesday 06 May 2025 00:47:17 +0000 (0:00:02.596) 0:00:19.196 ***********
2025-05-06 00:53:54.693371 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:53:54.693384 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:53:54.693396 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:53:54.693408 | orchestrator |
2025-05-06 00:53:54.693421 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-05-06 00:53:54.693433 | orchestrator | Tuesday 06 May 2025 00:47:18 +0000 (0:00:01.425) 0:00:20.622 ***********
2025-05-06 00:53:54.693446 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:53:54.693459 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:53:54.693472 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:53:54.693484 | orchestrator |
2025-05-06 00:53:54.693497 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-05-06 00:53:54.693509 | orchestrator | Tuesday 06 May 2025 00:47:21 +0000 (0:00:02.864) 0:00:23.486 ***********
2025-05-06 00:53:54.693522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.693556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.693570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.693583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.693644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.693670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.693713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.693735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.693748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.693874 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.693977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.693994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.694007 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.694076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.694091 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.694103 | orchestrator |
2025-05-06 00:53:54.694116 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-05-06 00:53:54.694128 | orchestrator | Tuesday 06 May 2025 00:47:23 +0000 (0:00:02.327) 0:00:25.814 ***********
2025-05-06 00:53:54.694151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.694164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.694177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.694190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.694209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.694223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.694236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.694262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.694276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.694288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.694302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.694330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.696289 | orchestrator |
2025-05-06 00:53:54.696388 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-05-06 00:53:54.696411 | orchestrator | Tuesday 06 May 2025 00:47:28 +0000 (0:00:04.443) 0:00:30.257 ***********
2025-05-06 00:53:54.696452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.696471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.696486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.696501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.696516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.696549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.696584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.696601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.696615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.696630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.696645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.696661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-06 00:53:54.696675 | orchestrator | 2025-05-06 00:53:54.696690 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-06 00:53:54.696705 | orchestrator | Tuesday 06 May 2025 00:47:31 +0000 (0:00:03.564) 0:00:33.822 *********** 2025-05-06 00:53:54.696732 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-06 00:53:54.696749 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-06 00:53:54.696763 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-06 00:53:54.696777 | orchestrator | 2025-05-06 00:53:54.696791 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-06 00:53:54.696805 | orchestrator | Tuesday 06 May 2025 00:47:34 +0000 (0:00:02.175) 0:00:35.997 *********** 2025-05-06 00:53:54.696822 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-06 00:53:54.696838 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-06 00:53:54.696854 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-06 00:53:54.696870 | orchestrator | 2025-05-06 00:53:54.696886 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-06 00:53:54.696902 | orchestrator | Tuesday 06 May 2025 00:47:38 +0000 (0:00:04.012) 0:00:40.010 *********** 2025-05-06 00:53:54.696943 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.696960 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.696976 | orchestrator | skipping: 
[testbed-node-1] 2025-05-06 00:53:54.696991 | orchestrator | 2025-05-06 00:53:54.697007 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-06 00:53:54.697023 | orchestrator | Tuesday 06 May 2025 00:47:40 +0000 (0:00:02.213) 0:00:42.224 *********** 2025-05-06 00:53:54.697039 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-06 00:53:54.697056 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-06 00:53:54.697072 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-06 00:53:54.697088 | orchestrator | 2025-05-06 00:53:54.697104 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-06 00:53:54.697119 | orchestrator | Tuesday 06 May 2025 00:47:42 +0000 (0:00:02.593) 0:00:44.817 *********** 2025-05-06 00:53:54.697135 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-06 00:53:54.697151 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-06 00:53:54.697167 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-06 00:53:54.697183 | orchestrator | 2025-05-06 00:53:54.697198 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-06 00:53:54.697212 | orchestrator | Tuesday 06 May 2025 00:47:45 +0000 (0:00:02.164) 0:00:46.982 *********** 2025-05-06 00:53:54.697226 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-06 00:53:54.697255 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 
2025-05-06 00:53:54.697271 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-06 00:53:54.697284 | orchestrator | 2025-05-06 00:53:54.697298 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-06 00:53:54.697312 | orchestrator | Tuesday 06 May 2025 00:47:46 +0000 (0:00:01.661) 0:00:48.644 *********** 2025-05-06 00:53:54.697326 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-06 00:53:54.697340 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-06 00:53:54.697354 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-06 00:53:54.697367 | orchestrator | 2025-05-06 00:53:54.697389 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-06 00:53:54.697403 | orchestrator | Tuesday 06 May 2025 00:47:48 +0000 (0:00:01.604) 0:00:50.249 *********** 2025-05-06 00:53:54.697417 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.697431 | orchestrator | 2025-05-06 00:53:54.697445 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-06 00:53:54.697459 | orchestrator | Tuesday 06 May 2025 00:47:49 +0000 (0:00:00.855) 0:00:51.105 *********** 2025-05-06 00:53:54.697474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-06 00:53:54.697508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-06 00:53:54.697529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-06 00:53:54.697545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-06 00:53:54.697560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-06 00:53:54.697574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-06 00:53:54.697595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-06 00:53:54.697620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-06 00:53:54.697640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-06 00:53:54.697655 | orchestrator | 2025-05-06 00:53:54.697669 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-06 00:53:54.697685 | orchestrator | Tuesday 06 May 2025 00:47:52 +0000 (0:00:03.319) 0:00:54.424 *********** 2025-05-06 00:53:54.697699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-06 00:53:54.697714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-06 00:53:54.697728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-06 00:53:54.697749 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.697764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-06 00:53:54.697778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-06 00:53:54.697804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-06 00:53:54.697820 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.697835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-06 00:53:54.697850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-06 00:53:54.697864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-06 00:53:54.697884 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.697899 | orchestrator | 2025-05-06 00:53:54.697913 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-06 00:53:54.697962 | orchestrator | Tuesday 06 May 2025 00:47:53 +0000 (0:00:00.729) 0:00:55.154 *********** 2025-05-06 00:53:54.697977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-06 00:53:54.697992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-06 00:53:54.698106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-06 00:53:54.698130 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.698145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-06 00:53:54.698160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-06 00:53:54.698174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-06 00:53:54.698197 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.698212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-06 00:53:54.698227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-06 00:53:54.698247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-06 00:53:54.698262 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.698277 | orchestrator | 2025-05-06 00:53:54.698291 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] 
************************ 2025-05-06 00:53:54.698306 | orchestrator | Tuesday 06 May 2025 00:47:54 +0000 (0:00:01.067) 0:00:56.221 *********** 2025-05-06 00:53:54.698326 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-06 00:53:54.698341 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-06 00:53:54.698355 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-06 00:53:54.698369 | orchestrator | 2025-05-06 00:53:54.698383 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-06 00:53:54.698397 | orchestrator | Tuesday 06 May 2025 00:47:56 +0000 (0:00:02.026) 0:00:58.248 *********** 2025-05-06 00:53:54.698412 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-06 00:53:54.698428 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-06 00:53:54.698451 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-06 00:53:54.698483 | orchestrator | 2025-05-06 00:53:54.698512 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-06 00:53:54.698535 | orchestrator | Tuesday 06 May 2025 00:47:58 +0000 (0:00:02.000) 0:01:00.248 *********** 2025-05-06 00:53:54.698558 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-06 00:53:54.698592 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-06 00:53:54.698615 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})
2025-05-06 00:53:54.698640 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-06 00:53:54.698664 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.698688 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-06 00:53:54.698707 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.698722 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-06 00:53:54.698735 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.698749 | orchestrator |
2025-05-06 00:53:54.698763 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-05-06 00:53:54.698778 | orchestrator | Tuesday 06 May 2025 00:47:59 +0000 (0:00:01.571) 0:01:01.820 ***********
2025-05-06 00:53:54.698793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.698812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.698846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-06 00:53:54.698888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.698913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.698987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-06 00:53:54.699004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.699019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.699034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.699048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-06 00:53:54.699073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.699106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce', '__omit_place_holder__731e7921deb4e5fb16777ba92e077f62255a3dce'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-06 00:53:54.699122 | orchestrator |
2025-05-06 00:53:54.699136 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-05-06 00:53:54.699150 | orchestrator | Tuesday 06 May 2025 00:48:03 +0000 (0:00:03.529) 0:01:05.349 ***********
2025-05-06 00:53:54.699165 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:53:54.699180 | orchestrator |
2025-05-06 00:53:54.699194 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-05-06 00:53:54.699209 | orchestrator | Tuesday 06 May 2025 00:48:04 +0000 (0:00:00.815) 0:01:06.164 ***********
2025-05-06 00:53:54.699223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-06 00:53:54.699239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.699254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-06 00:53:54.699334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-06 00:53:54.699350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.699364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.699379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699450 | orchestrator |
2025-05-06 00:53:54.699464 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-05-06 00:53:54.699479 | orchestrator | Tuesday 06 May 2025 00:48:08 +0000 (0:00:03.904) 0:01:10.069 ***********
2025-05-06 00:53:54.699493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-06 00:53:54.699507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.699522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699562 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.699605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-06 00:53:54.699622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.699641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699704 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.699728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-05-06 00:53:54.699777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.699822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.699875 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.699899 | orchestrator |
2025-05-06 00:53:54.699952 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-05-06 00:53:54.699982 | orchestrator | Tuesday 06 May 2025 00:48:09 +0000 (0:00:00.867) 0:01:10.936 ***********
2025-05-06 00:53:54.700010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-06 00:53:54.700032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-06 00:53:54.700047 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.700061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-06 00:53:54.700075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-05-06 00:53:54.700089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-06 00:53:54.700104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-05-06 00:53:54.700118 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.700132 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.700146 | orchestrator |
2025-05-06 00:53:54.700160 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-05-06 00:53:54.700173 | orchestrator | Tuesday 06 May 2025 00:48:10 +0000 (0:00:01.424) 0:01:12.361 ***********
2025-05-06 00:53:54.700197 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:53:54.700211 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:53:54.700224 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:53:54.700238 | orchestrator |
2025-05-06 00:53:54.700252 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-05-06 00:53:54.700266 | orchestrator | Tuesday 06 May 2025 00:48:11 +0000 (0:00:01.265) 0:01:13.627 ***********
2025-05-06 00:53:54.700280 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:53:54.700294 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:53:54.700307 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:53:54.700321 | orchestrator |
2025-05-06 00:53:54.700335 | orchestrator | TASK [include_role : barbican] *************************************************
2025-05-06 00:53:54.700349 | orchestrator | Tuesday 06 May 2025 00:48:13 +0000 (0:00:02.116) 0:01:15.744 ***********
2025-05-06 00:53:54.700369 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:53:54.700383 | orchestrator |
2025-05-06 00:53:54.700397 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-05-06 00:53:54.700411 | orchestrator | Tuesday 06 May 2025 00:48:14 +0000 (0:00:00.814) 0:01:16.559 ***********
2025-05-06 00:53:54.700437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 00:53:54.700453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.700468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.700483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 00:53:54.700517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.700539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.700554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 00:53:54.700569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.700584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.700605 | orchestrator |
2025-05-06 00:53:54.700620 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-05-06 00:53:54.700634 | orchestrator | Tuesday 06 May 2025 00:48:19 +0000 (0:00:04.862) 0:01:21.421 ***********
2025-05-06 00:53:54.700661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 00:53:54.700685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.700700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment':
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.700714 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.700729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.700744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.700777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.700793 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.700814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.700830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.700845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.700859 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.700873 | orchestrator | 2025-05-06 00:53:54.700887 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-06 00:53:54.700902 | orchestrator | Tuesday 06 May 2025 00:48:20 +0000 (0:00:00.756) 0:01:22.178 *********** 2025-05-06 00:53:54.700977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-06 00:53:54.700994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-06 00:53:54.701010 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.701025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-06 00:53:54.701045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-06 00:53:54.701060 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.701075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-06 00:53:54.701089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-06 00:53:54.701103 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.701117 | orchestrator | 2025-05-06 00:53:54.701132 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-06 00:53:54.701146 | orchestrator | Tuesday 06 May 2025 00:48:21 +0000 (0:00:00.834) 0:01:23.012 *********** 2025-05-06 00:53:54.701160 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.701174 | orchestrator | changed: [testbed-node-0] 2025-05-06 
00:53:54.701188 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.701202 | orchestrator | 2025-05-06 00:53:54.701216 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-06 00:53:54.701230 | orchestrator | Tuesday 06 May 2025 00:48:22 +0000 (0:00:01.131) 0:01:24.143 *********** 2025-05-06 00:53:54.701244 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.701258 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.701272 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.701294 | orchestrator | 2025-05-06 00:53:54.701318 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-06 00:53:54.701344 | orchestrator | Tuesday 06 May 2025 00:48:24 +0000 (0:00:02.036) 0:01:26.180 *********** 2025-05-06 00:53:54.701371 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.701389 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.701403 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.701417 | orchestrator | 2025-05-06 00:53:54.701439 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-06 00:53:54.701454 | orchestrator | Tuesday 06 May 2025 00:48:24 +0000 (0:00:00.320) 0:01:26.501 *********** 2025-05-06 00:53:54.701469 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.701482 | orchestrator | 2025-05-06 00:53:54.701494 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-06 00:53:54.701507 | orchestrator | Tuesday 06 May 2025 00:48:25 +0000 (0:00:00.856) 0:01:27.358 *********** 2025-05-06 00:53:54.701520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': 
['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-06 00:53:54.701554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-06 00:53:54.701568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-06 00:53:54.701581 | orchestrator | 2025-05-06 00:53:54.701594 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-06 00:53:54.701606 | orchestrator | Tuesday 06 May 2025 00:48:29 +0000 (0:00:04.128) 0:01:31.487 *********** 2025-05-06 00:53:54.701619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-06 00:53:54.701632 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.701659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-06 00:53:54.701679 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.701692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-06 00:53:54.701705 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.701718 | orchestrator | 2025-05-06 00:53:54.701731 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-06 00:53:54.701743 | orchestrator | Tuesday 06 May 2025 00:48:31 +0000 (0:00:01.802) 0:01:33.289 *********** 2025-05-06 00:53:54.701755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 
check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-06 00:53:54.701770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-06 00:53:54.701783 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.701796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-06 00:53:54.701809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-06 00:53:54.701822 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.701835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-06 00:53:54.701857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-06 00:53:54.701877 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.701890 | orchestrator | 2025-05-06 00:53:54.701903 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-06 00:53:54.702064 | orchestrator | Tuesday 06 May 2025 00:48:33 +0000 (0:00:02.172) 0:01:35.461 *********** 2025-05-06 00:53:54.702119 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.702134 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.702147 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.702159 | orchestrator | 2025-05-06 00:53:54.702171 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-06 00:53:54.702184 | orchestrator | Tuesday 06 May 2025 00:48:34 +0000 (0:00:00.702) 0:01:36.164 *********** 2025-05-06 00:53:54.702196 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.702209 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.702221 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.702233 | orchestrator | 2025-05-06 00:53:54.702246 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-06 00:53:54.702258 | orchestrator | Tuesday 06 May 2025 00:48:35 +0000 (0:00:01.329) 0:01:37.493 *********** 2025-05-06 
00:53:54.702270 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.702282 | orchestrator | 2025-05-06 00:53:54.702294 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-06 00:53:54.702307 | orchestrator | Tuesday 06 May 2025 00:48:36 +0000 (0:00:00.781) 0:01:38.275 *********** 2025-05-06 00:53:54.702320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.702335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702348 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.702416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.702475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702507 | orchestrator | 2025-05-06 00:53:54.702517 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-06 00:53:54.702533 | orchestrator | Tuesday 06 May 2025 00:48:40 +0000 (0:00:04.550) 0:01:42.825 *********** 2025-05-06 00:53:54.702544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.702566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702582 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702604 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.702615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.702626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702675 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.702686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.702696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.702728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.702739 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.702749 | orchestrator |
2025-05-06 00:53:54.702759 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-05-06 00:53:54.702770 | orchestrator | Tuesday 06 May 2025 00:48:41 +0000 (0:00:00.752) 0:01:43.578 ***********
2025-05-06 00:53:54.702780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-06 00:53:54.702795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-06 00:53:54.702806 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.702816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-06 00:53:54.702826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-06 00:53:54.702837 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.702847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-06 00:53:54.702858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-06 00:53:54.702868 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.702878 | orchestrator |
2025-05-06 00:53:54.702889 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-05-06 00:53:54.702899 | orchestrator | Tuesday 06 May 2025 00:48:42 +0000 (0:00:00.939) 0:01:44.518 ***********
2025-05-06 00:53:54.702909 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:53:54.702938 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:53:54.702949 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:53:54.702959 | orchestrator |
2025-05-06 00:53:54.702969 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-05-06 00:53:54.702979 | orchestrator | Tuesday 06 May 2025 00:48:43 +0000 (0:00:01.226) 0:01:45.744 ***********
2025-05-06 00:53:54.702990 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:53:54.703000 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:53:54.703010 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:53:54.703020 | orchestrator |
2025-05-06 00:53:54.703030 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-05-06 00:53:54.703040 | orchestrator | Tuesday 06 May 2025 00:48:46 +0000 (0:00:02.136) 0:01:47.881 ***********
2025-05-06 00:53:54.703051 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.703065 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.703082 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.703092 | orchestrator |
2025-05-06 00:53:54.703102 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-05-06 00:53:54.703112 | orchestrator | Tuesday 06 May 2025 00:48:46 +0000 (0:00:00.266) 0:01:48.148 ***********
2025-05-06 00:53:54.703123 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.703133 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.703143 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.703153 | orchestrator |
2025-05-06 00:53:54.703164 | orchestrator | TASK [include_role : designate] ************************************************
2025-05-06 00:53:54.703174 | orchestrator | Tuesday 06 May 2025 00:48:46 +0000 (0:00:00.422) 0:01:48.570 ***********
2025-05-06 00:53:54.703184 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:53:54.703194 | orchestrator |
2025-05-06 00:53:54.703204 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-05-06 00:53:54.703215 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:01.042) 0:01:49.613 ***********
2025-05-06 00:53:54.703225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '9001', 'listen_port': '9001'}}}}) 2025-05-06 00:53:54.703242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-06 00:53:54.703259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2025-05-06 00:53:54.703282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703319 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-06 00:53:54.703341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-06 00:53:54.703353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-06 00:53:54.703434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-06 00:53:54.703450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703512 | orchestrator | 2025-05-06 00:53:54.703527 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-06 00:53:54.703538 | orchestrator | Tuesday 06 May 2025 00:48:52 +0000 (0:00:04.673) 0:01:54.286 *********** 2025-05-06 00:53:54.703548 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 00:53:54.703563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-06 00:53:54.703574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703643 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.703654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 00:53:54.703665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-06 00:53:54.703681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703744 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.703761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 00:53:54.703772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-06 00:53:54.703783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-06 
00:53:54.703834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.703852 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.703862 | orchestrator | 2025-05-06 00:53:54.703873 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-06 00:53:54.703883 | orchestrator | Tuesday 06 May 2025 00:48:53 +0000 (0:00:00.769) 0:01:55.056 *********** 2025-05-06 00:53:54.703893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-06 00:53:54.703904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-06 00:53:54.703914 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.703939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-06 00:53:54.703950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-06 
00:53:54.703960 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.703970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-06 00:53:54.703980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-06 00:53:54.703990 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.704000 | orchestrator | 2025-05-06 00:53:54.704011 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-06 00:53:54.704021 | orchestrator | Tuesday 06 May 2025 00:48:54 +0000 (0:00:01.248) 0:01:56.304 *********** 2025-05-06 00:53:54.704031 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.704041 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.704051 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.704061 | orchestrator | 2025-05-06 00:53:54.704071 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-06 00:53:54.704081 | orchestrator | Tuesday 06 May 2025 00:48:55 +0000 (0:00:01.300) 0:01:57.604 *********** 2025-05-06 00:53:54.704091 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.704108 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.704118 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.704128 | orchestrator | 2025-05-06 00:53:54.704138 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-06 00:53:54.704148 | orchestrator | Tuesday 06 May 2025 00:48:57 +0000 (0:00:01.982) 0:01:59.587 *********** 2025-05-06 00:53:54.704158 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.704169 | orchestrator | 
skipping: [testbed-node-1] 2025-05-06 00:53:54.704179 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.704189 | orchestrator | 2025-05-06 00:53:54.704199 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-06 00:53:54.704214 | orchestrator | Tuesday 06 May 2025 00:48:58 +0000 (0:00:00.456) 0:02:00.043 *********** 2025-05-06 00:53:54.704224 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.704234 | orchestrator | 2025-05-06 00:53:54.704245 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-06 00:53:54.704255 | orchestrator | Tuesday 06 May 2025 00:48:59 +0000 (0:00:01.115) 0:02:01.158 *********** 2025-05-06 00:53:54.704265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 00:53:54.704277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 00:53:54.704306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 00:53:54.704325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 00:53:54.704348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 00:53:54.704366 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 00:53:54.704385 | orchestrator | 2025-05-06 00:53:54.704396 | orchestrator | TASK [haproxy-config : Add 
configuration for glance when using single external frontend] *** 2025-05-06 00:53:54.704406 | orchestrator | Tuesday 06 May 2025 00:49:05 +0000 (0:00:05.796) 0:02:06.955 *********** 2025-05-06 00:53:54.704422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-06 00:53:54.706205 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 00:53:54.706294 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.706369 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-06 00:53:54.706407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 00:53:54.706431 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.706445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-06 00:53:54.706477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 00:53:54.706511 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.706524 | orchestrator | 2025-05-06 00:53:54.706538 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-06 00:53:54.706555 | orchestrator | Tuesday 06 May 2025 00:49:08 +0000 (0:00:03.462) 0:02:10.417 *********** 2025-05-06 00:53:54.706569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-06 00:53:54.706589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-06 00:53:54.706603 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.706616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-06 00:53:54.706636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-06 00:53:54.706650 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.706663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-06 00:53:54.706676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-06 00:53:54.706689 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.706702 | orchestrator | 2025-05-06 00:53:54.706714 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-06 00:53:54.706727 | orchestrator | Tuesday 06 May 2025 00:49:12 +0000 (0:00:03.999) 0:02:14.416 *********** 2025-05-06 00:53:54.706740 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.706752 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.706765 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.706777 | orchestrator | 2025-05-06 00:53:54.706798 | orchestrator | TASK [proxysql-config : Copying 
over glance ProxySQL rules config] ************* 2025-05-06 00:53:54.706820 | orchestrator | Tuesday 06 May 2025 00:49:13 +0000 (0:00:01.137) 0:02:15.553 *********** 2025-05-06 00:53:54.706841 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.706862 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.706885 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.706906 | orchestrator | 2025-05-06 00:53:54.706957 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-06 00:53:54.706972 | orchestrator | Tuesday 06 May 2025 00:49:15 +0000 (0:00:01.748) 0:02:17.302 *********** 2025-05-06 00:53:54.706993 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.707006 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.707018 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.707031 | orchestrator | 2025-05-06 00:53:54.707043 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-06 00:53:54.707055 | orchestrator | Tuesday 06 May 2025 00:49:15 +0000 (0:00:00.342) 0:02:17.645 *********** 2025-05-06 00:53:54.707068 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.707080 | orchestrator | 2025-05-06 00:53:54.707093 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-06 00:53:54.707105 | orchestrator | Tuesday 06 May 2025 00:49:16 +0000 (0:00:01.034) 0:02:18.680 *********** 2025-05-06 00:53:54.707119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-06 00:53:54.707133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-06 00:53:54.707154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-06 00:53:54.707168 | orchestrator | 2025-05-06 00:53:54.707180 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-06 00:53:54.707193 | orchestrator | Tuesday 06 May 2025 00:49:21 +0000 (0:00:04.850) 
0:02:23.530 *********** 2025-05-06 00:53:54.707206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-06 00:53:54.707219 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.707231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-06 00:53:54.707250 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.707263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-06 00:53:54.707275 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.707288 | orchestrator | 2025-05-06 00:53:54.707300 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-06 00:53:54.707313 | orchestrator | Tuesday 06 May 2025 00:49:22 +0000 (0:00:00.416) 0:02:23.946 *********** 2025-05-06 00:53:54.707326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-06 00:53:54.707344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-06 00:53:54.707358 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.707422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-06 00:53:54.707440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-06 00:53:54.707453 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.707466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  
2025-05-06 00:53:54.707485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-06 00:53:54.707498 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.707510 | orchestrator | 2025-05-06 00:53:54.707523 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-06 00:53:54.707535 | orchestrator | Tuesday 06 May 2025 00:49:23 +0000 (0:00:01.012) 0:02:24.958 *********** 2025-05-06 00:53:54.707547 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.707559 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.707571 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.707584 | orchestrator | 2025-05-06 00:53:54.707596 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-06 00:53:54.707608 | orchestrator | Tuesday 06 May 2025 00:49:24 +0000 (0:00:01.276) 0:02:26.234 *********** 2025-05-06 00:53:54.707621 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.707633 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.707652 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.707665 | orchestrator | 2025-05-06 00:53:54.707677 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-06 00:53:54.707689 | orchestrator | Tuesday 06 May 2025 00:49:26 +0000 (0:00:02.262) 0:02:28.497 *********** 2025-05-06 00:53:54.707706 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.707726 | orchestrator | 2025-05-06 00:53:54.707746 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-06 00:53:54.707768 | orchestrator | Tuesday 06 May 2025 00:49:27 +0000 (0:00:01.191) 0:02:29.689 
*********** 2025-05-06 00:53:54.707802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.707818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.707831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.707853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.707874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.707900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.707914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.707971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': 
{'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.707995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.708018 | orchestrator | 2025-05-06 00:53:54.708049 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-06 00:53:54.708084 | orchestrator | Tuesday 06 May 2025 00:49:35 +0000 (0:00:07.473) 0:02:37.162 *********** 2025-05-06 00:53:54.708109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.708136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.708150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.708163 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.708176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.708208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 
'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.708315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.708338 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.708366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.708382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.708397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.708411 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.708425 | orchestrator | 2025-05-06 00:53:54.708439 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-06 00:53:54.708454 | orchestrator | Tuesday 06 May 2025 00:49:36 +0000 (0:00:00.970) 0:02:38.133 *********** 2025-05-06 00:53:54.708468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708483 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708543 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.708563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708624 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.708638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-06 00:53:54.708695 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.708709 | orchestrator | 2025-05-06 00:53:54.708723 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-06 00:53:54.708737 | orchestrator | Tuesday 06 May 2025 00:49:37 +0000 (0:00:01.272) 0:02:39.405 *********** 2025-05-06 00:53:54.708751 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.708765 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.708779 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.708792 | orchestrator | 2025-05-06 00:53:54.708806 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-06 00:53:54.708820 | orchestrator | Tuesday 06 May 2025 00:49:38 +0000 (0:00:01.347) 0:02:40.753 *********** 2025-05-06 00:53:54.708834 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.708848 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.708862 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.708875 | orchestrator | 
2025-05-06 00:53:54.708894 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-06 00:53:54.708908 | orchestrator | Tuesday 06 May 2025 00:49:41 +0000 (0:00:02.159) 0:02:42.913 *********** 2025-05-06 00:53:54.708948 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.708963 | orchestrator | 2025-05-06 00:53:54.708977 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-06 00:53:54.708990 | orchestrator | Tuesday 06 May 2025 00:49:42 +0000 (0:00:00.962) 0:02:43.876 *********** 2025-05-06 00:53:54.709024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:53:54.709043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:53:54.709082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:53:54.709107 | orchestrator | 2025-05-06 00:53:54.709122 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-06 00:53:54.709136 | orchestrator | Tuesday 06 May 2025 00:49:45 +0000 (0:00:03.793) 0:02:47.669 *********** 2025-05-06 00:53:54.709150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-06 00:53:54.709172 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.709207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-06 00:53:54.709224 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.709239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-06 00:53:54.709268 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.709283 | orchestrator | 2025-05-06 00:53:54.709302 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-06 00:53:54.709317 | orchestrator | Tuesday 06 May 2025 00:49:46 +0000 (0:00:00.787) 0:02:48.457 *********** 2025-05-06 00:53:54.709332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-06 00:53:54.709347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-06 00:53:54.709362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-06 00:53:54.709377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-06 00:53:54.709392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-06 00:53:54.709407 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.709426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-06 00:53:54.709447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-06 00:53:54.709461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-06 00:53:54.709475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2025-05-06 00:53:54.709489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-06 00:53:54.709503 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.709517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-06 00:53:54.709532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-06 00:53:54.709552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-06 00:53:54.709567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-06 00:53:54.709581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-06 00:53:54.709595 | 
orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.709608 | orchestrator | 2025-05-06 00:53:54.709623 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-06 00:53:54.709637 | orchestrator | Tuesday 06 May 2025 00:49:47 +0000 (0:00:01.373) 0:02:49.830 *********** 2025-05-06 00:53:54.709651 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.709664 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.709678 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.709692 | orchestrator | 2025-05-06 00:53:54.709705 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-06 00:53:54.709719 | orchestrator | Tuesday 06 May 2025 00:49:49 +0000 (0:00:01.371) 0:02:51.202 *********** 2025-05-06 00:53:54.709733 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.709747 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.709760 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.709774 | orchestrator | 2025-05-06 00:53:54.709788 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-06 00:53:54.709810 | orchestrator | Tuesday 06 May 2025 00:49:51 +0000 (0:00:02.074) 0:02:53.276 *********** 2025-05-06 00:53:54.709824 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.709838 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.709851 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.709865 | orchestrator | 2025-05-06 00:53:54.709879 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-06 00:53:54.709893 | orchestrator | Tuesday 06 May 2025 00:49:51 +0000 (0:00:00.440) 0:02:53.716 *********** 2025-05-06 00:53:54.709907 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.709972 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.709989 | 
orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.710004 | orchestrator | 2025-05-06 00:53:54.710054 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-06 00:53:54.710072 | orchestrator | Tuesday 06 May 2025 00:49:52 +0000 (0:00:00.273) 0:02:53.990 *********** 2025-05-06 00:53:54.710087 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.710105 | orchestrator | 2025-05-06 00:53:54.710134 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-06 00:53:54.710160 | orchestrator | Tuesday 06 May 2025 00:49:53 +0000 (0:00:01.181) 0:02:55.171 *********** 2025-05-06 00:53:54.710188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:53:54.710217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:53:54.710259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-06 00:53:54.710288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:53:54.710342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:53:54.710369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-06 00:53:54.710395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:53:54.710445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:53:54.710472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-06 00:53:54.710578 | orchestrator | 2025-05-06 00:53:54.710601 | orchestrator | TASK 
[haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-06 00:53:54.710616 | orchestrator | Tuesday 06 May 2025 00:49:57 +0000 (0:00:03.926) 0:02:59.097 *********** 2025-05-06 00:53:54.710631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-06 00:53:54.710665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:53:54.710681 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-06 00:53:54.710696 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.710764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-06 00:53:54.710791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:53:54.710806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-06 00:53:54.710820 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.710835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-06 00:53:54.710861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:53:54.710876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-06 00:53:54.710891 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.710905 | orchestrator | 2025-05-06 00:53:54.710948 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-06 00:53:54.710968 | orchestrator | Tuesday 06 May 2025 00:49:58 +0000 (0:00:01.069) 0:03:00.167 *********** 2025-05-06 00:53:54.710990 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-06 00:53:54.711022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-06 00:53:54.711037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-06 00:53:54.711051 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.711066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-06 00:53:54.711080 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.711094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-06 00:53:54.711108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-06 00:53:54.711122 | orchestrator | skipping: [testbed-node-1] 2025-05-06 
00:53:54.711137 | orchestrator | 2025-05-06 00:53:54.711151 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-06 00:53:54.711165 | orchestrator | Tuesday 06 May 2025 00:49:59 +0000 (0:00:01.208) 0:03:01.376 *********** 2025-05-06 00:53:54.711179 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.711193 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.711207 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.711221 | orchestrator | 2025-05-06 00:53:54.711235 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-06 00:53:54.711248 | orchestrator | Tuesday 06 May 2025 00:50:00 +0000 (0:00:01.390) 0:03:02.766 *********** 2025-05-06 00:53:54.711262 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.711276 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.711290 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.711303 | orchestrator | 2025-05-06 00:53:54.711325 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-06 00:53:54.711350 | orchestrator | Tuesday 06 May 2025 00:50:03 +0000 (0:00:02.436) 0:03:05.203 *********** 2025-05-06 00:53:54.711375 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.711396 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.711420 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.711552 | orchestrator | 2025-05-06 00:53:54.711656 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-06 00:53:54.711671 | orchestrator | Tuesday 06 May 2025 00:50:03 +0000 (0:00:00.295) 0:03:05.498 *********** 2025-05-06 00:53:54.711686 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.711699 | orchestrator | 2025-05-06 00:53:54.711713 | orchestrator | TASK [haproxy-config : Copying 
over magnum haproxy config] ********************* 2025-05-06 00:53:54.711727 | orchestrator | Tuesday 06 May 2025 00:50:04 +0000 (0:00:01.305) 0:03:06.803 *********** 2025-05-06 00:53:54.711742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 00:53:54.711776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.711792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 00:53:54.711807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.711822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 00:53:54.711843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.711858 | orchestrator | 2025-05-06 00:53:54.711872 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-06 00:53:54.711886 | orchestrator | Tuesday 06 May 2025 00:50:09 +0000 (0:00:04.779) 0:03:11.583 *********** 2025-05-06 00:53:54.711907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-06 00:53:54.711946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.711962 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.711977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-06 00:53:54.711992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712014 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.712035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-06 00:53:54.712051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712065 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.712079 | orchestrator | 2025-05-06 00:53:54.712093 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-06 00:53:54.712107 | orchestrator | Tuesday 06 May 2025 00:50:11 +0000 (0:00:01.306) 0:03:12.889 *********** 2025-05-06 00:53:54.712122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-06 00:53:54.712137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-06 00:53:54.712157 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.712172 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-06 00:53:54.712187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-06 00:53:54.712201 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.712215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-06 00:53:54.712229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-06 00:53:54.712253 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.712268 | orchestrator | 2025-05-06 00:53:54.712282 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-06 00:53:54.712295 | orchestrator | Tuesday 06 May 2025 00:50:12 +0000 (0:00:01.111) 0:03:14.001 *********** 2025-05-06 00:53:54.712309 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.712323 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.712337 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.712351 | orchestrator | 2025-05-06 00:53:54.712365 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-06 00:53:54.712379 | orchestrator | Tuesday 06 May 2025 00:50:13 +0000 (0:00:01.395) 0:03:15.396 *********** 2025-05-06 00:53:54.712393 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.712408 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.712429 | orchestrator | changed: 
[testbed-node-2] 2025-05-06 00:53:54.712444 | orchestrator | 2025-05-06 00:53:54.712459 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-06 00:53:54.712473 | orchestrator | Tuesday 06 May 2025 00:50:15 +0000 (0:00:02.163) 0:03:17.559 *********** 2025-05-06 00:53:54.712487 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.712501 | orchestrator | 2025-05-06 00:53:54.712515 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-06 00:53:54.712529 | orchestrator | Tuesday 06 May 2025 00:50:16 +0000 (0:00:01.158) 0:03:18.718 *********** 2025-05-06 00:53:54.712550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-06 00:53:54.712565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-06 00:53:54.712632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-06 00:53:54.712746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': 
'30'}}})  2025-05-06 00:53:54.712761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712791 | orchestrator | 2025-05-06 00:53:54.712806 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-06 00:53:54.712820 | orchestrator | Tuesday 06 May 2025 00:50:21 +0000 (0:00:04.173) 0:03:22.892 *********** 2025-05-06 00:53:54.712841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-06 00:53:54.712857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': 
{'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.712908 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.712950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-06 00:53:54.712975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.713008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.713034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.713059 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.713084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-06 00:53:54.713113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.713128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.713143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.713158 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.713172 | orchestrator | 2025-05-06 00:53:54.713187 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-06 00:53:54.713202 | orchestrator | Tuesday 06 May 2025 00:50:21 +0000 (0:00:00.681) 0:03:23.573 *********** 2025-05-06 00:53:54.713216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-06 00:53:54.713252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-06 00:53:54.713268 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.713292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-06 00:53:54.713307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-06 00:53:54.713329 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.713343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-06 00:53:54.713358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-06 00:53:54.713371 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.713385 | orchestrator | 2025-05-06 00:53:54.713399 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-06 00:53:54.713413 | orchestrator | Tuesday 06 May 2025 00:50:22 +0000 (0:00:00.868) 0:03:24.442 *********** 2025-05-06 00:53:54.713427 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.713442 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.713456 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.713470 | orchestrator | 2025-05-06 00:53:54.713484 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-06 00:53:54.713497 | orchestrator | Tuesday 06 May 2025 00:50:23 +0000 (0:00:01.250) 0:03:25.693 *********** 2025-05-06 00:53:54.713511 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.713525 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.713539 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.713553 | orchestrator | 2025-05-06 00:53:54.713567 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-06 00:53:54.713580 | orchestrator | Tuesday 06 May 2025 00:50:25 +0000 (0:00:01.905) 0:03:27.599 *********** 2025-05-06 00:53:54.713594 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.713608 | orchestrator | 2025-05-06 00:53:54.713622 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] 
******************************* 2025-05-06 00:53:54.713635 | orchestrator | Tuesday 06 May 2025 00:50:26 +0000 (0:00:01.198) 0:03:28.797 *********** 2025-05-06 00:53:54.713650 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-06 00:53:54.713664 | orchestrator | 2025-05-06 00:53:54.713677 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-06 00:53:54.713691 | orchestrator | Tuesday 06 May 2025 00:50:30 +0000 (0:00:03.430) 0:03:32.228 *********** 2025-05-06 00:53:54.713707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-06 00:53:54.713742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-06 00:53:54.713797 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.713813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-06 00:53:54.713830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-06 00:53:54.713845 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.713868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-06 00:53:54.713901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-06 00:53:54.713977 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.713998 | orchestrator | 2025-05-06 00:53:54.714044 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-06 00:53:54.714061 | orchestrator | Tuesday 06 May 2025 00:50:33 +0000 (0:00:02.930) 0:03:35.158 *********** 2025-05-06 00:53:54.714076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-06 00:53:54.714110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-06 00:53:54.714126 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.714141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-06 00:53:54.714158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-06 00:53:54.714173 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.714195 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-06 00:53:54.714218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-06 00:53:54.714234 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.714248 | orchestrator | 2025-05-06 00:53:54.714262 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-06 00:53:54.714279 | orchestrator | Tuesday 06 May 2025 00:50:36 +0000 (0:00:03.110) 0:03:38.269 *********** 2025-05-06 00:53:54.714308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-06 00:53:54.714335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-06 00:53:54.714358 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.714373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-06 00:53:54.714394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-06 00:53:54.714409 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.714440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-06 00:53:54.714456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-06 00:53:54.714473 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.714496 | orchestrator | 2025-05-06 00:53:54.714519 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-06 00:53:54.714537 | orchestrator | Tuesday 06 May 2025 00:50:39 +0000 (0:00:03.178) 0:03:41.447 *********** 2025-05-06 00:53:54.714551 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.714563 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.714576 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.714588 | orchestrator | 2025-05-06 00:53:54.714601 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-06 00:53:54.714613 | orchestrator | Tuesday 06 May 2025 00:50:41 +0000 (0:00:02.129) 0:03:43.577 *********** 2025-05-06 00:53:54.714625 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.714638 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.714650 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.714662 | orchestrator | 2025-05-06 00:53:54.714675 | orchestrator | TASK [include_role : 
masakari] ************************************************* 2025-05-06 00:53:54.714687 | orchestrator | Tuesday 06 May 2025 00:50:43 +0000 (0:00:01.800) 0:03:45.377 *********** 2025-05-06 00:53:54.714700 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.714713 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.714725 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.714737 | orchestrator | 2025-05-06 00:53:54.714750 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-06 00:53:54.714762 | orchestrator | Tuesday 06 May 2025 00:50:43 +0000 (0:00:00.278) 0:03:45.656 *********** 2025-05-06 00:53:54.714774 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.714787 | orchestrator | 2025-05-06 00:53:54.714799 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-06 00:53:54.714811 | orchestrator | Tuesday 06 May 2025 00:50:45 +0000 (0:00:01.405) 0:03:47.061 *********** 2025-05-06 00:53:54.714824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-06 00:53:54.714845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-06 00:53:54.714866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-06 00:53:54.714879 | orchestrator | 2025-05-06 00:53:54.714892 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-06 00:53:54.714904 | orchestrator | Tuesday 06 May 2025 00:50:46 +0000 (0:00:01.621) 0:03:48.683 *********** 2025-05-06 00:53:54.714940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-06 00:53:54.714958 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.714972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-06 00:53:54.714996 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.715009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-06 00:53:54.715059 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.715074 | orchestrator | 2025-05-06 00:53:54.715087 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-06 00:53:54.715100 | orchestrator | Tuesday 06 May 2025 00:50:47 +0000 (0:00:00.570) 0:03:49.254 *********** 2025-05-06 00:53:54.715112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-06 00:53:54.715125 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.715138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-06 00:53:54.715151 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.715163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-06 00:53:54.715176 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.715189 | orchestrator | 2025-05-06 00:53:54.715208 | orchestrator | TASK [proxysql-config : Copying over 
memcached ProxySQL users config] ********** 2025-05-06 00:53:54.715221 | orchestrator | Tuesday 06 May 2025 00:50:48 +0000 (0:00:00.729) 0:03:49.983 *********** 2025-05-06 00:53:54.715233 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.715246 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.715259 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.715271 | orchestrator | 2025-05-06 00:53:54.715284 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-06 00:53:54.715296 | orchestrator | Tuesday 06 May 2025 00:50:48 +0000 (0:00:00.650) 0:03:50.633 *********** 2025-05-06 00:53:54.715308 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.715321 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.715333 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.715346 | orchestrator | 2025-05-06 00:53:54.715358 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-06 00:53:54.715371 | orchestrator | Tuesday 06 May 2025 00:50:50 +0000 (0:00:01.431) 0:03:52.064 *********** 2025-05-06 00:53:54.715383 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.715396 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.715408 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.715421 | orchestrator | 2025-05-06 00:53:54.715434 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-06 00:53:54.715446 | orchestrator | Tuesday 06 May 2025 00:50:50 +0000 (0:00:00.289) 0:03:52.354 *********** 2025-05-06 00:53:54.715459 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.715481 | orchestrator | 2025-05-06 00:53:54.715494 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-06 00:53:54.715507 | orchestrator | Tuesday 06 May 
2025 00:50:51 +0000 (0:00:01.472) 0:03:53.827 *********** 2025-05-06 00:53:54.715535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 00:53:54.715551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 00:53:54.715628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 00:53:54.715660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 00:53:54.715673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 00:53:54.715705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.715739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 00:53:54.715761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 00:53:54.715775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 00:53:54.715807 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 00:53:54.715846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 00:53:54.715911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.715953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 00:53:54.715977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 00:53:54.716010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.716026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 00:53:54.716045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:53:54.716079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 00:53:54.716092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-06 00:53:54.716128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-06 00:53:54.716147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-06 00:53:54.716179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-06 00:53:54.716255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 00:53:54.716282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 00:53:54.716304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-06 00:53:54.716331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:53:54.716369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 00:53:54.716382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-06 00:53:54.716417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-06 00:53:54.716430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716449 | orchestrator |
2025-05-06 00:53:54.716462 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-05-06 00:53:54.716475 | orchestrator | Tuesday 06 May 2025 00:50:56 +0000 (0:00:05.004) 0:03:58.831 ***********
2025-05-06 00:53:54.716494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-06 00:53:54.716516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-06 00:53:54.716530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-06 00:53:54.716642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-06 00:53:54.716666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 00:53:54.716714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 00:53:54.716727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 00:53:54.716740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 00:53:54.716753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-06 00:53:54.716813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-06 00:53:54.716826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.716858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:53:54.716872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 00:53:54.716895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 00:53:54.717129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 00:53:54.717157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.717184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.717198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-06 00:53:54.717236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-06 00:53:54.717251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-06 00:53:54.717272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-06 00:53:54.717286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.717299 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.717312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.717331 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.717344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 00:53:54.717364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.717378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.717399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.717412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 00:53:54.717433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.717447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 00:53:54.717465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 00:53:54.717478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.717499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 00:53:54.717513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.717532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.717545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 00:53:54.717557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.717585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 00:53:54.717599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 00:53:54.717613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.717630 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.717643 | orchestrator | 2025-05-06 00:53:54.717685 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-06 00:53:54.717704 | orchestrator | Tuesday 06 May 2025 00:50:58 +0000 (0:00:01.777) 0:04:00.608 *********** 2025-05-06 00:53:54.717718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-06 00:53:54.717733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-06 00:53:54.717748 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.717770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  
2025-05-06 00:53:54.717785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-06 00:53:54.717799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-06 00:53:54.717813 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.717828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-06 00:53:54.717842 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.717856 | orchestrator | 2025-05-06 00:53:54.717870 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-06 00:53:54.717884 | orchestrator | Tuesday 06 May 2025 00:51:00 +0000 (0:00:01.775) 0:04:02.384 *********** 2025-05-06 00:53:54.717898 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.717912 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.718000 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.718058 | orchestrator | 2025-05-06 00:53:54.718076 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-06 00:53:54.718091 | orchestrator | Tuesday 06 May 2025 00:51:01 +0000 (0:00:01.370) 0:04:03.754 *********** 2025-05-06 00:53:54.718104 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.718117 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.718128 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.718139 | orchestrator | 2025-05-06 00:53:54.718149 | orchestrator | TASK [include_role : placement] 
************************************************ 2025-05-06 00:53:54.718159 | orchestrator | Tuesday 06 May 2025 00:51:04 +0000 (0:00:02.319) 0:04:06.074 *********** 2025-05-06 00:53:54.718170 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.718180 | orchestrator | 2025-05-06 00:53:54.718190 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-06 00:53:54.718200 | orchestrator | Tuesday 06 May 2025 00:51:05 +0000 (0:00:01.559) 0:04:07.634 *********** 2025-05-06 00:53:54.718211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.718229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.718250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.718261 | orchestrator | 2025-05-06 00:53:54.718271 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-06 00:53:54.718281 | orchestrator | Tuesday 06 May 2025 00:51:09 +0000 (0:00:03.916) 0:04:11.550 *********** 2025-05-06 00:53:54.718303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.718315 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.718325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.718340 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.718351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.718362 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.718372 | orchestrator | 2025-05-06 00:53:54.718382 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-06 00:53:54.718392 | orchestrator | Tuesday 06 May 2025 00:51:10 +0000 (0:00:00.490) 0:04:12.041 *********** 2025-05-06 00:53:54.718403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-06 00:53:54.718413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-06 00:53:54.718424 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.718434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-06 00:53:54.718444 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-06 00:53:54.718455 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.718465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-06 00:53:54.718475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-06 00:53:54.718486 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.718496 | orchestrator | 2025-05-06 00:53:54.718506 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-06 00:53:54.718521 | orchestrator | Tuesday 06 May 2025 00:51:11 +0000 (0:00:01.199) 0:04:13.240 *********** 2025-05-06 00:53:54.718531 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.718541 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.718556 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.718567 | orchestrator | 2025-05-06 00:53:54.718577 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-06 00:53:54.718587 | orchestrator | Tuesday 06 May 2025 00:51:12 +0000 (0:00:01.184) 0:04:14.425 *********** 2025-05-06 00:53:54.718597 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.718607 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.718654 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.718666 | orchestrator | 2025-05-06 00:53:54.718699 | orchestrator | TASK [include_role : nova] 
***************************************************** 2025-05-06 00:53:54.718710 | orchestrator | Tuesday 06 May 2025 00:51:14 +0000 (0:00:02.358) 0:04:16.784 *********** 2025-05-06 00:53:54.718720 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.718730 | orchestrator | 2025-05-06 00:53:54.718741 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-06 00:53:54.718751 | orchestrator | Tuesday 06 May 2025 00:51:16 +0000 (0:00:01.599) 0:04:18.384 *********** 2025-05-06 00:53:54.718773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.718785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.718795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.718833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.718861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.718872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.718883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.718894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.718909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.718948 | orchestrator | 2025-05-06 00:53:54.718969 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-06 00:53:54.718986 | orchestrator | Tuesday 06 May 2025 00:51:21 +0000 (0:00:04.968) 0:04:23.352 *********** 2025-05-06 00:53:54.719005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.719017 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.719028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.719038 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.719049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.719078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.719090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.719101 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.719111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.719122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.719133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 00:53:54.719149 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.719159 | orchestrator | 2025-05-06 00:53:54.719169 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-06 00:53:54.719180 | orchestrator | Tuesday 06 May 2025 00:51:22 +0000 (0:00:00.760) 0:04:24.113 *********** 2025-05-06 00:53:54.719190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719237 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.719247 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719290 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.719300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-06 00:53:54.719331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}})  2025-05-06 00:53:54.719342 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.719352 | orchestrator | 2025-05-06 00:53:54.719362 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-06 00:53:54.719377 | orchestrator | Tuesday 06 May 2025 00:51:23 +0000 (0:00:00.974) 0:04:25.087 *********** 2025-05-06 00:53:54.719388 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.719398 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.719409 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.719419 | orchestrator | 2025-05-06 00:53:54.719429 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-06 00:53:54.719439 | orchestrator | Tuesday 06 May 2025 00:51:24 +0000 (0:00:01.277) 0:04:26.365 *********** 2025-05-06 00:53:54.719449 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.719459 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.719469 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.719479 | orchestrator | 2025-05-06 00:53:54.719490 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-06 00:53:54.719500 | orchestrator | Tuesday 06 May 2025 00:51:26 +0000 (0:00:02.343) 0:04:28.708 *********** 2025-05-06 00:53:54.719510 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.719520 | orchestrator | 2025-05-06 00:53:54.719534 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-06 00:53:54.719544 | orchestrator | Tuesday 06 May 2025 00:51:28 +0000 (0:00:01.601) 0:04:30.309 *********** 2025-05-06 00:53:54.719554 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-06 00:53:54.719566 | 
orchestrator | 2025-05-06 00:53:54.719576 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-06 00:53:54.719586 | orchestrator | Tuesday 06 May 2025 00:51:29 +0000 (0:00:01.202) 0:04:31.512 *********** 2025-05-06 00:53:54.719601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-06 00:53:54.719619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-06 00:53:54.719631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-06 00:53:54.719641 | orchestrator | 2025-05-06 00:53:54.719652 | orchestrator | TASK [haproxy-config : Add 
configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-06 00:53:54.719662 | orchestrator | Tuesday 06 May 2025 00:51:34 +0000 (0:00:04.438) 0:04:35.951 *********** 2025-05-06 00:53:54.719673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-06 00:53:54.719689 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.719699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-06 00:53:54.719710 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.719720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-06 00:53:54.719731 | 
orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.719741 | orchestrator | 2025-05-06 00:53:54.719751 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-06 00:53:54.719761 | orchestrator | Tuesday 06 May 2025 00:51:35 +0000 (0:00:01.383) 0:04:37.334 *********** 2025-05-06 00:53:54.719771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-06 00:53:54.719782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-06 00:53:54.719793 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.719803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-06 00:53:54.719820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-06 00:53:54.719831 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.719842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-06 00:53:54.719852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-06 00:53:54.719862 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.719873 | orchestrator | 2025-05-06 00:53:54.719883 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-06 00:53:54.719893 | orchestrator | Tuesday 06 May 2025 00:51:36 +0000 (0:00:01.447) 0:04:38.781 *********** 2025-05-06 00:53:54.719903 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.719914 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.719941 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.719952 | orchestrator | 2025-05-06 00:53:54.719962 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-06 00:53:54.719988 | orchestrator | Tuesday 06 May 2025 00:51:39 +0000 (0:00:02.353) 0:04:41.135 *********** 2025-05-06 00:53:54.719998 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.720008 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.720018 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.720028 | orchestrator | 2025-05-06 00:53:54.720038 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-06 00:53:54.720048 | orchestrator | Tuesday 06 May 2025 00:51:42 +0000 (0:00:03.501) 0:04:44.636 *********** 2025-05-06 00:53:54.720059 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-06 00:53:54.720069 | orchestrator | 2025-05-06 00:53:54.720079 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-06 00:53:54.720089 | orchestrator | Tuesday 06 May 2025 00:51:43 +0000 (0:00:01.202) 0:04:45.839 *********** 
2025-05-06 00:53:54.720100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-06 00:53:54.720110 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.720121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-06 00:53:54.720131 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.720141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-06 00:53:54.720152 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.720162 | orchestrator | 2025-05-06 00:53:54.720172 | orchestrator | 
TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-06 00:53:54.720183 | orchestrator | Tuesday 06 May 2025 00:51:45 +0000 (0:00:01.618) 0:04:47.457 *********** 2025-05-06 00:53:54.720206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-06 00:53:54.720218 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.720228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-06 00:53:54.720244 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.720254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-06 00:53:54.720265 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.720275 | orchestrator | 2025-05-06 00:53:54.720285 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-06 00:53:54.720295 | orchestrator | Tuesday 06 May 2025 00:51:47 +0000 (0:00:01.811) 0:04:49.269 *********** 2025-05-06 00:53:54.720305 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.720322 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.720332 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.720342 | orchestrator | 2025-05-06 00:53:54.720353 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-06 00:53:54.720363 | orchestrator | Tuesday 06 May 2025 00:51:49 +0000 (0:00:02.005) 0:04:51.275 *********** 2025-05-06 00:53:54.720373 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.720419 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.720431 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.720442 | orchestrator | 2025-05-06 00:53:54.720452 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-06 00:53:54.720462 | orchestrator | Tuesday 06 May 2025 00:51:52 +0000 (0:00:02.769) 0:04:54.044 *********** 2025-05-06 00:53:54.720472 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.720482 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.720492 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.720502 | orchestrator | 2025-05-06 00:53:54.720513 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-06 00:53:54.720523 | orchestrator | Tuesday 06 May 2025 00:51:54 +0000 (0:00:02.732) 0:04:56.776 *********** 2025-05-06 00:53:54.720533 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-05-06 00:53:54.720544 | orchestrator |
2025-05-06 00:53:54.720554 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-05-06 00:53:54.720564 | orchestrator | Tuesday 06 May 2025 00:51:56 +0000 (0:00:01.072) 0:04:57.849 ***********
2025-05-06 00:53:54.720574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-06 00:53:54.720585 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.720595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-06 00:53:54.720611 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.720626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-06 00:53:54.720637 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.720647 | orchestrator |
2025-05-06 00:53:54.720658 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-05-06 00:53:54.720668 | orchestrator | Tuesday 06 May 2025 00:51:57 +0000 (0:00:01.297) 0:04:59.147 ***********
2025-05-06 00:53:54.720685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-06 00:53:54.720697 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.720707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-06 00:53:54.720717 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.720728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-05-06 00:53:54.720738 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.720749 | orchestrator |
2025-05-06 00:53:54.720759 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-05-06 00:53:54.720769 | orchestrator | Tuesday 06 May 2025 00:51:58 +0000 (0:00:01.157) 0:05:00.305 ***********
2025-05-06 00:53:54.720780 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.720790 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.720800 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.720810 | orchestrator |
2025-05-06 00:53:54.720820 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-05-06 00:53:54.720830 | orchestrator | Tuesday 06 May 2025 00:52:00 +0000 (0:00:01.763) 0:05:02.068 ***********
2025-05-06 00:53:54.720840 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:53:54.720850 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:53:54.720860 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:53:54.720870 | orchestrator |
2025-05-06 00:53:54.720881 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-05-06 00:53:54.720894 | orchestrator | Tuesday 06 May 2025 00:52:03 +0000 (0:00:03.315) 0:05:04.991 ***********
2025-05-06 00:53:54.720910 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:53:54.720968 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:53:54.720980 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:53:54.720990 | orchestrator |
2025-05-06 00:53:54.721000 | orchestrator | TASK [include_role : octavia] **************************************************
2025-05-06 00:53:54.721010 | orchestrator | Tuesday 06 May 2025 00:52:06 +0000 (0:00:01.647) 0:05:08.307 ***********
2025-05-06 00:53:54.721021 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:53:54.721031 | orchestrator |
2025-05-06 00:53:54.721041 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-05-06 00:53:54.721051 | orchestrator | Tuesday 06 May 2025 00:52:08 +0000 (0:00:04.504) 0:05:09.954 ***********
2025-05-06 00:53:54.721066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-06 00:53:54.721078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-06 00:53:54.721089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.721136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-06 00:53:54.721147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-06 00:53:54.721162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.721201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-06 00:53:54.721216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-06 00:53:54.721225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.721258 | orchestrator |
2025-05-06 00:53:54.721266 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-05-06 00:53:54.721275 | orchestrator | Tuesday 06 May 2025 00:52:12 +0000 (0:00:04.504) 0:05:14.459 ***********
2025-05-06 00:53:54.721290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-06 00:53:54.721304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-06 00:53:54.721313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.721344 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.721359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-06 00:53:54.721369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-06 00:53:54.721382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.721409 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.721428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-05-06 00:53:54.721438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-05-06 00:53:54.721447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-05-06 00:53:54.721469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-05-06 00:53:54.721478 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.721487 | orchestrator |
2025-05-06 00:53:54.721496 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-05-06 00:53:54.721505 | orchestrator | Tuesday 06 May 2025 00:52:13 +0000 (0:00:00.935) 0:05:15.395 ***********
2025-05-06 00:53:54.721514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-06 00:53:54.721523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-06 00:53:54.721531 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:53:54.721540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-06 00:53:54.721549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-06 00:53:54.721558 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:53:54.721570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-06 00:53:54.721633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-05-06 00:53:54.721643 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:53:54.721652 | orchestrator |
2025-05-06 00:53:54.721660 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-05-06 00:53:54.721669 | orchestrator | Tuesday 06 May 2025 00:52:14 +0000 (0:00:01.279) 0:05:16.674 ***********
2025-05-06 00:53:54.721677 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:53:54.721686 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:53:54.721694 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:53:54.721703 | orchestrator |
2025-05-06 00:53:54.721711 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-05-06 00:53:54.721726 | orchestrator | Tuesday 06 May 2025 00:52:16 +0000 (0:00:01.506) 0:05:18.181 ***********
2025-05-06 00:53:54.721735 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:53:54.721743 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:53:54.721752 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:53:54.721760 | orchestrator |
2025-05-06 00:53:54.721769 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-05-06 00:53:54.721778 | orchestrator | Tuesday 06 May 2025 00:52:18 +0000 (0:00:02.320) 0:05:20.501 ***********
2025-05-06 00:53:54.721786 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:53:54.721795 | orchestrator |
2025-05-06 00:53:54.721803 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-05-06 00:53:54.721812 | orchestrator | Tuesday 06 May 2025 00:52:20 +0000 (0:00:01.702) 0:05:22.204 ***********
2025-05-06 00:53:54.721820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-06 00:53:54.721836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-06 00:53:54.721846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-06 00:53:54.721879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-06 00:53:54.721894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-06 00:53:54.721910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-06 00:53:54.721938 | orchestrator |
2025-05-06 00:53:54.721948 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-05-06 00:53:54.721957 | orchestrator | Tuesday 06 May 2025 00:52:26 +0000 (0:00:06.138) 0:05:28.342 ***********
2025-05-06 00:53:54.721987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-06 00:53:54.721998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-06 00:53:54.722039 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.722050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-06 00:53:54.722060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-06 00:53:54.722069 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.722078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-06 00:53:54.722109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-06 00:53:54.722131 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.722140 | orchestrator | 2025-05-06 00:53:54.722149 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-06 00:53:54.722157 | orchestrator | Tuesday 06 May 2025 00:52:27 +0000 (0:00:00.834) 0:05:29.177 *********** 2025-05-06 00:53:54.722166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-06 00:53:54.722175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-06 00:53:54.722184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-06 00:53:54.722192 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.722201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2025-05-06 00:53:54.722210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-06 00:53:54.722218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-06 00:53:54.722227 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.722239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-06 00:53:54.722248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-06 00:53:54.722257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-06 00:53:54.722265 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.722274 | orchestrator | 2025-05-06 00:53:54.722282 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-06 00:53:54.722291 | orchestrator | Tuesday 06 May 2025 00:52:28 +0000 (0:00:01.335) 0:05:30.513 *********** 2025-05-06 00:53:54.722300 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.722312 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.722320 | orchestrator | 
skipping: [testbed-node-2] 2025-05-06 00:53:54.722329 | orchestrator | 2025-05-06 00:53:54.722337 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-06 00:53:54.722346 | orchestrator | Tuesday 06 May 2025 00:52:29 +0000 (0:00:00.414) 0:05:30.928 *********** 2025-05-06 00:53:54.722354 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.722363 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.722371 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.722380 | orchestrator | 2025-05-06 00:53:54.722388 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-06 00:53:54.722397 | orchestrator | Tuesday 06 May 2025 00:52:30 +0000 (0:00:01.619) 0:05:32.547 *********** 2025-05-06 00:53:54.722423 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.722434 | orchestrator | 2025-05-06 00:53:54.722442 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-06 00:53:54.722451 | orchestrator | Tuesday 06 May 2025 00:52:32 +0000 (0:00:01.774) 0:05:34.322 *********** 2025-05-06 00:53:54.722460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2025-05-06 00:53:54.722469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 00:53:54.722479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.722512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-06 00:53:54.722541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 00:53:54.722559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.722587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-06 00:53:54.722596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 00:53:54.722609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722653 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.722662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-06 00:53:54.722671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 00:53:54.722687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.722740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-06 00:53:54.722760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 00:53:54.722773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.722855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-06 00:53:54.722876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 00:53:54.722889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.722952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.722967 | orchestrator | 2025-05-06 00:53:54.722980 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-06 00:53:54.722988 | orchestrator | Tuesday 06 May 2025 00:52:37 +0000 (0:00:05.011) 0:05:39.334 *********** 2025-05-06 00:53:54.722998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 00:53:54.723007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 00:53:54.723016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.723058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 00:53:54.723068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 00:53:54.723077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.723114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723122 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.723135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 00:53:54.723144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 00:53:54.723153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.723190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 00:53:54.723202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 00:53:54.723211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.723248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 00:53:54.723258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723266 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.723275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 00:53:54.723284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.723320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 00:53:54.723337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 00:53:54.723346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 
00:53:54.723355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 00:53:54.723377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 00:53:54.723385 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.723394 | orchestrator | 2025-05-06 00:53:54.723406 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-06 00:53:54.723415 | orchestrator | Tuesday 06 May 2025 00:52:39 +0000 (0:00:01.510) 
0:05:40.844 *********** 2025-05-06 00:53:54.723424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-06 00:53:54.723433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-06 00:53:54.723442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-06 00:53:54.723451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-06 00:53:54.723461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-06 00:53:54.723470 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.723482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-06 00:53:54.723492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-06 00:53:54.723501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-06 00:53:54.723509 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.723518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-06 00:53:54.723529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-06 00:53:54.723539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-06 00:53:54.723551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-06 00:53:54.723560 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.723568 | orchestrator | 2025-05-06 00:53:54.723577 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-06 00:53:54.723586 | orchestrator | Tuesday 06 May 2025 00:52:40 
+0000 (0:00:01.647) 0:05:42.492 *********** 2025-05-06 00:53:54.723599 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.723607 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.723616 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.723625 | orchestrator | 2025-05-06 00:53:54.723633 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-06 00:53:54.723642 | orchestrator | Tuesday 06 May 2025 00:52:41 +0000 (0:00:00.772) 0:05:43.264 *********** 2025-05-06 00:53:54.723650 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.723659 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.723667 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.723676 | orchestrator | 2025-05-06 00:53:54.723684 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-06 00:53:54.723693 | orchestrator | Tuesday 06 May 2025 00:52:43 +0000 (0:00:01.979) 0:05:45.243 *********** 2025-05-06 00:53:54.723701 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.723710 | orchestrator | 2025-05-06 00:53:54.723719 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-06 00:53:54.723727 | orchestrator | Tuesday 06 May 2025 00:52:45 +0000 (0:00:01.855) 0:05:47.098 *********** 2025-05-06 00:53:54.723742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-06 00:53:54.723752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-06 00:53:54.723765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-06 00:53:54.723784 | orchestrator | 2025-05-06 00:53:54.723793 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-06 00:53:54.723802 | orchestrator | Tuesday 06 May 2025 00:52:48 +0000 (0:00:02.803) 0:05:49.902 *********** 2025-05-06 00:53:54.723810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-06 00:53:54.723819 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.723828 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-06 00:53:54.723903 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.723913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-06 00:53:54.723969 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.723981 | orchestrator | 2025-05-06 00:53:54.723990 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-06 00:53:54.723998 | orchestrator | Tuesday 06 May 2025 00:52:48 +0000 (0:00:00.661) 0:05:50.564 *********** 2025-05-06 00:53:54.724006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-06 00:53:54.724020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-06 00:53:54.724028 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724036 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-06 00:53:54.724057 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724065 | orchestrator | 2025-05-06 00:53:54.724073 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-06 00:53:54.724081 | orchestrator | Tuesday 06 May 2025 00:52:49 +0000 (0:00:01.077) 0:05:51.641 *********** 2025-05-06 00:53:54.724089 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724096 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724104 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724112 | orchestrator | 2025-05-06 00:53:54.724120 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-06 00:53:54.724128 | orchestrator | Tuesday 06 May 2025 00:52:50 +0000 (0:00:00.442) 0:05:52.084 
*********** 2025-05-06 00:53:54.724136 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724144 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724152 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724160 | orchestrator | 2025-05-06 00:53:54.724168 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-06 00:53:54.724176 | orchestrator | Tuesday 06 May 2025 00:52:51 +0000 (0:00:01.718) 0:05:53.803 *********** 2025-05-06 00:53:54.724184 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:53:54.724192 | orchestrator | 2025-05-06 00:53:54.724200 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-06 00:53:54.724208 | orchestrator | Tuesday 06 May 2025 00:52:53 +0000 (0:00:01.931) 0:05:55.735 *********** 2025-05-06 00:53:54.724216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.724225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 
'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.724238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.724251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 
'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.724260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.724268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-06 00:53:54.724276 | orchestrator | 2025-05-06 00:53:54.724285 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-06 00:53:54.724297 | orchestrator | Tuesday 06 May 2025 00:53:00 +0000 (0:00:06.633) 0:06:02.368 *********** 2025-05-06 00:53:54.724305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.724317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.724325 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.724342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.724350 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.724376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-06 00:53:54.724385 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724393 | orchestrator | 2025-05-06 00:53:54.724401 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-06 00:53:54.724409 | orchestrator | Tuesday 06 May 2025 00:53:02 +0000 (0:00:01.569) 0:06:03.938 *********** 2025-05-06 00:53:54.724417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724433 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724449 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724494 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-06 00:53:54.724534 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724542 | orchestrator | 2025-05-06 00:53:54.724550 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-06 00:53:54.724557 | orchestrator | Tuesday 06 May 2025 00:53:03 +0000 (0:00:01.634) 0:06:05.573 *********** 2025-05-06 00:53:54.724565 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.724573 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.724581 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.724589 | orchestrator | 2025-05-06 00:53:54.724597 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-06 00:53:54.724608 | orchestrator | Tuesday 06 May 2025 00:53:05 +0000 (0:00:01.686) 0:06:07.260 *********** 2025-05-06 00:53:54.724616 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.724624 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.724632 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.724640 | orchestrator | 2025-05-06 00:53:54.724648 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-06 00:53:54.724656 | orchestrator | Tuesday 06 
May 2025 00:53:07 +0000 (0:00:02.389) 0:06:09.649 *********** 2025-05-06 00:53:54.724664 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724671 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724683 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724691 | orchestrator | 2025-05-06 00:53:54.724699 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-06 00:53:54.724707 | orchestrator | Tuesday 06 May 2025 00:53:08 +0000 (0:00:00.288) 0:06:09.937 *********** 2025-05-06 00:53:54.724715 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724723 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724730 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724738 | orchestrator | 2025-05-06 00:53:54.724746 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-06 00:53:54.724754 | orchestrator | Tuesday 06 May 2025 00:53:08 +0000 (0:00:00.555) 0:06:10.493 *********** 2025-05-06 00:53:54.724762 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724770 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724778 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724786 | orchestrator | 2025-05-06 00:53:54.724794 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-06 00:53:54.724802 | orchestrator | Tuesday 06 May 2025 00:53:09 +0000 (0:00:00.555) 0:06:11.048 *********** 2025-05-06 00:53:54.724810 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724817 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724831 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724839 | orchestrator | 2025-05-06 00:53:54.724847 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-06 00:53:54.724855 | orchestrator | Tuesday 06 May 
2025 00:53:09 +0000 (0:00:00.645) 0:06:11.694 *********** 2025-05-06 00:53:54.724863 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724870 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724878 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724886 | orchestrator | 2025-05-06 00:53:54.724894 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-06 00:53:54.724902 | orchestrator | Tuesday 06 May 2025 00:53:10 +0000 (0:00:00.331) 0:06:12.026 *********** 2025-05-06 00:53:54.724910 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.724934 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.724944 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.724952 | orchestrator | 2025-05-06 00:53:54.724960 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-06 00:53:54.724968 | orchestrator | Tuesday 06 May 2025 00:53:11 +0000 (0:00:01.042) 0:06:13.068 *********** 2025-05-06 00:53:54.724975 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.724984 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.724996 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.725005 | orchestrator | 2025-05-06 00:53:54.725013 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-06 00:53:54.725021 | orchestrator | Tuesday 06 May 2025 00:53:12 +0000 (0:00:00.890) 0:06:13.959 *********** 2025-05-06 00:53:54.725029 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.725037 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.725045 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.725053 | orchestrator | 2025-05-06 00:53:54.725061 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-06 00:53:54.725069 | orchestrator | Tuesday 06 May 2025 00:53:12 +0000 (0:00:00.332) 
0:06:14.291 *********** 2025-05-06 00:53:54.725077 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.725085 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.725092 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.725100 | orchestrator | 2025-05-06 00:53:54.725108 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-06 00:53:54.725116 | orchestrator | Tuesday 06 May 2025 00:53:13 +0000 (0:00:01.242) 0:06:15.533 *********** 2025-05-06 00:53:54.725124 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.725132 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.725139 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.725147 | orchestrator | 2025-05-06 00:53:54.725155 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-06 00:53:54.725163 | orchestrator | Tuesday 06 May 2025 00:53:14 +0000 (0:00:01.225) 0:06:16.759 *********** 2025-05-06 00:53:54.725171 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.725179 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.725186 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.725194 | orchestrator | 2025-05-06 00:53:54.725202 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-06 00:53:54.725210 | orchestrator | Tuesday 06 May 2025 00:53:15 +0000 (0:00:00.940) 0:06:17.699 *********** 2025-05-06 00:53:54.725218 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.725226 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.725233 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.725241 | orchestrator | 2025-05-06 00:53:54.725249 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-06 00:53:54.725257 | orchestrator | Tuesday 06 May 2025 00:53:26 +0000 (0:00:10.479) 0:06:28.179 *********** 2025-05-06 00:53:54.725265 | 
orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.725273 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.725281 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.725288 | orchestrator | 2025-05-06 00:53:54.725296 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-06 00:53:54.725309 | orchestrator | Tuesday 06 May 2025 00:53:27 +0000 (0:00:00.994) 0:06:29.174 *********** 2025-05-06 00:53:54.725317 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.725325 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.725333 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.725340 | orchestrator | 2025-05-06 00:53:54.725348 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-06 00:53:54.725356 | orchestrator | Tuesday 06 May 2025 00:53:34 +0000 (0:00:06.881) 0:06:36.055 *********** 2025-05-06 00:53:54.725364 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.725372 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.725379 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.725387 | orchestrator | 2025-05-06 00:53:54.725395 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-06 00:53:54.725406 | orchestrator | Tuesday 06 May 2025 00:53:37 +0000 (0:00:03.706) 0:06:39.762 *********** 2025-05-06 00:53:54.725414 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:53:54.725422 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:53:54.725430 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:53:54.725437 | orchestrator | 2025-05-06 00:53:54.725450 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-06 00:53:54.725458 | orchestrator | Tuesday 06 May 2025 00:53:43 +0000 (0:00:05.293) 0:06:45.055 *********** 2025-05-06 00:53:54.725466 | orchestrator | skipping: [testbed-node-0] 
2025-05-06 00:53:54.725474 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.725482 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.725490 | orchestrator | 2025-05-06 00:53:54.725498 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-06 00:53:54.725505 | orchestrator | Tuesday 06 May 2025 00:53:43 +0000 (0:00:00.574) 0:06:45.629 *********** 2025-05-06 00:53:54.725513 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.725521 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.725529 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.725536 | orchestrator | 2025-05-06 00:53:54.725545 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-06 00:53:54.725552 | orchestrator | Tuesday 06 May 2025 00:53:44 +0000 (0:00:00.570) 0:06:46.200 *********** 2025-05-06 00:53:54.725560 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.725568 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.725576 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.725584 | orchestrator | 2025-05-06 00:53:54.725592 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-06 00:53:54.725600 | orchestrator | Tuesday 06 May 2025 00:53:44 +0000 (0:00:00.334) 0:06:46.534 *********** 2025-05-06 00:53:54.725608 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.725616 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.725624 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.725631 | orchestrator | 2025-05-06 00:53:54.725639 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-06 00:53:54.725647 | orchestrator | Tuesday 06 May 2025 00:53:45 +0000 (0:00:00.604) 0:06:47.138 *********** 2025-05-06 00:53:54.725655 | orchestrator | skipping: [testbed-node-0] 
2025-05-06 00:53:54.725663 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.725671 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.725679 | orchestrator | 2025-05-06 00:53:54.725686 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-06 00:53:54.725694 | orchestrator | Tuesday 06 May 2025 00:53:45 +0000 (0:00:00.576) 0:06:47.714 *********** 2025-05-06 00:53:54.725702 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:53:54.725710 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:53:54.725718 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:53:54.725726 | orchestrator | 2025-05-06 00:53:54.725734 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-06 00:53:54.725745 | orchestrator | Tuesday 06 May 2025 00:53:46 +0000 (0:00:00.309) 0:06:48.024 *********** 2025-05-06 00:53:54.725754 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.725761 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.725769 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.725777 | orchestrator | 2025-05-06 00:53:54.725785 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-06 00:53:54.725793 | orchestrator | Tuesday 06 May 2025 00:53:49 +0000 (0:00:03.743) 0:06:51.767 *********** 2025-05-06 00:53:54.725801 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:53:54.725812 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:53:54.725820 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:53:54.725827 | orchestrator | 2025-05-06 00:53:54.725835 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:53:54.725843 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-06 00:53:54.725856 | orchestrator | testbed-node-1 : ok=126  changed=79  
unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-06 00:53:54.725870 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-06 00:53:54.725885 | orchestrator | 2025-05-06 00:53:54.725898 | orchestrator | 2025-05-06 00:53:54.725911 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 00:53:54.725962 | orchestrator | Tuesday 06 May 2025 00:53:51 +0000 (0:00:01.131) 0:06:52.899 *********** 2025-05-06 00:53:54.725976 | orchestrator | =============================================================================== 2025-05-06 00:53:54.725989 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.48s 2025-05-06 00:53:54.726002 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.47s 2025-05-06 00:53:54.726040 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 6.88s 2025-05-06 00:53:54.726051 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.63s 2025-05-06 00:53:54.726059 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.14s 2025-05-06 00:53:54.726067 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.80s 2025-05-06 00:53:54.726075 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 5.29s 2025-05-06 00:53:54.726083 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.01s 2025-05-06 00:53:54.726090 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.00s 2025-05-06 00:53:54.726099 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.97s 2025-05-06 00:53:54.726107 | orchestrator | haproxy-config : Copying over barbican haproxy config 
------------------- 4.86s 2025-05-06 00:53:54.726118 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.85s 2025-05-06 00:53:54.726125 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.78s 2025-05-06 00:53:54.726132 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.67s 2025-05-06 00:53:54.726144 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.55s 2025-05-06 00:53:57.729224 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.50s 2025-05-06 00:53:57.729334 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.44s 2025-05-06 00:53:57.729354 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.44s 2025-05-06 00:53:57.729369 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.17s 2025-05-06 00:53:57.729383 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.13s 2025-05-06 00:53:57.729398 | orchestrator | 2025-05-06 00:53:54 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:53:57.729440 | orchestrator | 2025-05-06 00:53:54 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:53:57.729455 | orchestrator | 2025-05-06 00:53:54 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:57.729469 | orchestrator | 2025-05-06 00:53:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:57.729483 | orchestrator | 2025-05-06 00:53:54 | INFO  | Task 48976896-dce3-42de-8dc0-82d23a0bf79b is in state STARTED 2025-05-06 00:53:57.729497 | orchestrator | 2025-05-06 00:53:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:53:57.729527 | orchestrator | 2025-05-06 00:53:57 
| INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:53:57.730113 | orchestrator | 2025-05-06 00:53:57 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:53:57.730218 | orchestrator | 2025-05-06 00:53:57 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:53:57.733657 | orchestrator | 2025-05-06 00:53:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:53:57.735196 | orchestrator | 2025-05-06 00:53:57 | INFO  | Task 48976896-dce3-42de-8dc0-82d23a0bf79b is in state STARTED 2025-05-06 00:54:00.779822 | orchestrator | 2025-05-06 00:53:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:00.780093 | orchestrator | 2025-05-06 00:54:00 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:00.780564 | orchestrator | 2025-05-06 00:54:00 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:00.780597 | orchestrator | 2025-05-06 00:54:00 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:00.781280 | orchestrator | 2025-05-06 00:54:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:00.782401 | orchestrator | 2025-05-06 00:54:00 | INFO  | Task 48976896-dce3-42de-8dc0-82d23a0bf79b is in state STARTED 2025-05-06 00:54:03.809494 | orchestrator | 2025-05-06 00:54:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:03.809729 | orchestrator | 2025-05-06 00:54:03 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:03.810443 | orchestrator | 2025-05-06 00:54:03 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:03.810482 | orchestrator | 2025-05-06 00:54:03 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:03.810847 | orchestrator | 2025-05-06 00:54:03 | INFO  
| Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:03.811150 | orchestrator | 2025-05-06 00:54:03 | INFO  | Task 48976896-dce3-42de-8dc0-82d23a0bf79b is in state SUCCESS 2025-05-06 00:54:06.844895 | orchestrator | 2025-05-06 00:54:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:06.845129 | orchestrator | 2025-05-06 00:54:06 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:06.845489 | orchestrator | 2025-05-06 00:54:06 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:06.845528 | orchestrator | 2025-05-06 00:54:06 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:06.847923 | orchestrator | 2025-05-06 00:54:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:09.883139 | orchestrator | 2025-05-06 00:54:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:09.883363 | orchestrator | 2025-05-06 00:54:09 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:09.884002 | orchestrator | 2025-05-06 00:54:09 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:09.885874 | orchestrator | 2025-05-06 00:54:09 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:09.886979 | orchestrator | 2025-05-06 00:54:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:09.887086 | orchestrator | 2025-05-06 00:54:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:12.929205 | orchestrator | 2025-05-06 00:54:12 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:12.929836 | orchestrator | 2025-05-06 00:54:12 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:12.930445 | orchestrator | 2025-05-06 00:54:12 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:12.931553 | orchestrator | 2025-05-06 00:54:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:15.966844 | orchestrator | 2025-05-06 00:54:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:15.967099 | orchestrator | 2025-05-06 00:54:15 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:15.967832 | orchestrator | 2025-05-06 00:54:15 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:15.967869 | orchestrator | 2025-05-06 00:54:15 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:15.968531 | orchestrator | 2025-05-06 00:54:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:19.001254 | orchestrator | 2025-05-06 00:54:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:19.001378 | orchestrator | 2025-05-06 00:54:19 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:19.001831 | orchestrator | 2025-05-06 00:54:19 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:19.001870 | orchestrator | 2025-05-06 00:54:19 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:19.002479 | orchestrator | 2025-05-06 00:54:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:19.003247 | orchestrator | 2025-05-06 00:54:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:22.056261 | orchestrator | 2025-05-06 00:54:22 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:22.057469 | orchestrator | 2025-05-06 00:54:22 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:22.065592 | orchestrator | 2025-05-06 00:54:22 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:22.068050 | orchestrator | 2025-05-06 00:54:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:22.068997 | orchestrator | 2025-05-06 00:54:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:25.122477 | orchestrator | 2025-05-06 00:54:25 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:25.123104 | orchestrator | 2025-05-06 00:54:25 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:25.123606 | orchestrator | 2025-05-06 00:54:25 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:25.124755 | orchestrator | 2025-05-06 00:54:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:28.183241 | orchestrator | 2025-05-06 00:54:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:28.183368 | orchestrator | 2025-05-06 00:54:28 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:28.185289 | orchestrator | 2025-05-06 00:54:28 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:28.187108 | orchestrator | 2025-05-06 00:54:28 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:28.188780 | orchestrator | 2025-05-06 00:54:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:31.241822 | orchestrator | 2025-05-06 00:54:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:31.242094 | orchestrator | 2025-05-06 00:54:31 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:31.242640 | orchestrator | 2025-05-06 00:54:31 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:31.244282 | orchestrator | 2025-05-06 00:54:31 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:31.245939 | orchestrator | 2025-05-06 00:54:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:31.246140 | orchestrator | 2025-05-06 00:54:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:34.298083 | orchestrator | 2025-05-06 00:54:34 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:34.299455 | orchestrator | 2025-05-06 00:54:34 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:34.301604 | orchestrator | 2025-05-06 00:54:34 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:34.302746 | orchestrator | 2025-05-06 00:54:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:37.341165 | orchestrator | 2025-05-06 00:54:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:37.341307 | orchestrator | 2025-05-06 00:54:37 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:37.342948 | orchestrator | 2025-05-06 00:54:37 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:37.346091 | orchestrator | 2025-05-06 00:54:37 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:40.402949 | orchestrator | 2025-05-06 00:54:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:40.403072 | orchestrator | 2025-05-06 00:54:37 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:40.403112 | orchestrator | 2025-05-06 00:54:40 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:40.406210 | orchestrator | 2025-05-06 00:54:40 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:40.407873 | orchestrator | 2025-05-06 00:54:40 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:40.411061 | orchestrator | 2025-05-06 00:54:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:43.440243 | orchestrator | 2025-05-06 00:54:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:43.440406 | orchestrator | 2025-05-06 00:54:43 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:43.441502 | orchestrator | 2025-05-06 00:54:43 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:43.443189 | orchestrator | 2025-05-06 00:54:43 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:43.444869 | orchestrator | 2025-05-06 00:54:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:46.488074 | orchestrator | 2025-05-06 00:54:43 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:46.488213 | orchestrator | 2025-05-06 00:54:46 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:46.489673 | orchestrator | 2025-05-06 00:54:46 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:46.491211 | orchestrator | 2025-05-06 00:54:46 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:46.492536 | orchestrator | 2025-05-06 00:54:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:46.492902 | orchestrator | 2025-05-06 00:54:46 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:49.537015 | orchestrator | 2025-05-06 00:54:49 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:49.537955 | orchestrator | 2025-05-06 00:54:49 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:49.540750 | orchestrator | 2025-05-06 00:54:49 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:49.542297 | orchestrator | 2025-05-06 00:54:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:52.593320 | orchestrator | 2025-05-06 00:54:49 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:52.593463 | orchestrator | 2025-05-06 00:54:52 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:52.602153 | orchestrator | 2025-05-06 00:54:52 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:52.607324 | orchestrator | 2025-05-06 00:54:52 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:52.611259 | orchestrator | 2025-05-06 00:54:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:52.611896 | orchestrator | 2025-05-06 00:54:52 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:55.647178 | orchestrator | 2025-05-06 00:54:55 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:55.647781 | orchestrator | 2025-05-06 00:54:55 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:55.649269 | orchestrator | 2025-05-06 00:54:55 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:55.650843 | orchestrator | 2025-05-06 00:54:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:54:58.698924 | orchestrator | 2025-05-06 00:54:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:54:58.699041 | orchestrator | 2025-05-06 00:54:58 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:54:58.700868 | orchestrator | 2025-05-06 00:54:58 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:54:58.703339 | orchestrator | 2025-05-06 00:54:58 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:54:58.705199 | orchestrator | 2025-05-06 00:54:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:01.758383 | orchestrator | 2025-05-06 00:54:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:01.758593 | orchestrator | 2025-05-06 00:55:01 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:01.759311 | orchestrator | 2025-05-06 00:55:01 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:01.761441 | orchestrator | 2025-05-06 00:55:01 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:01.763260 | orchestrator | 2025-05-06 00:55:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:04.818894 | orchestrator | 2025-05-06 00:55:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:04.819062 | orchestrator | 2025-05-06 00:55:04 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:04.820997 | orchestrator | 2025-05-06 00:55:04 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:04.823202 | orchestrator | 2025-05-06 00:55:04 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:04.824979 | orchestrator | 2025-05-06 00:55:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:07.878282 | orchestrator | 2025-05-06 00:55:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:07.878423 | orchestrator | 2025-05-06 00:55:07 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:07.882741 | orchestrator | 2025-05-06 00:55:07 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:07.884871 | orchestrator | 2025-05-06 00:55:07 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:07.884910 | orchestrator | 2025-05-06 00:55:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:10.939280 | orchestrator | 2025-05-06 00:55:07 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:10.939524 | orchestrator | 2025-05-06 00:55:10 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:10.940483 | orchestrator | 2025-05-06 00:55:10 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:10.940526 | orchestrator | 2025-05-06 00:55:10 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:10.941294 | orchestrator | 2025-05-06 00:55:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:13.995950 | orchestrator | 2025-05-06 00:55:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:13.996103 | orchestrator | 2025-05-06 00:55:13 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:13.996377 | orchestrator | 2025-05-06 00:55:13 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:13.997271 | orchestrator | 2025-05-06 00:55:13 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:13.997794 | orchestrator | 2025-05-06 00:55:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:17.057035 | orchestrator | 2025-05-06 00:55:13 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:17.057174 | orchestrator | 2025-05-06 00:55:17 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:17.057491 | orchestrator | 2025-05-06 00:55:17 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:17.058662 | orchestrator | 2025-05-06 00:55:17 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:17.062912 | orchestrator | 2025-05-06 00:55:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:20.100696 | orchestrator | 2025-05-06 00:55:17 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:20.100882 | orchestrator | 2025-05-06 00:55:20 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:20.102286 | orchestrator | 2025-05-06 00:55:20 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:20.103707 | orchestrator | 2025-05-06 00:55:20 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:20.105468 | orchestrator | 2025-05-06 00:55:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:23.149469 | orchestrator | 2025-05-06 00:55:20 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:23.149607 | orchestrator | 2025-05-06 00:55:23 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:23.150874 | orchestrator | 2025-05-06 00:55:23 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:23.152537 | orchestrator | 2025-05-06 00:55:23 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:23.153782 | orchestrator | 2025-05-06 00:55:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:23.154100 | orchestrator | 2025-05-06 00:55:23 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:26.209220 | orchestrator | 2025-05-06 00:55:26 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:26.210364 | orchestrator | 2025-05-06 00:55:26 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:26.211966 | orchestrator | 2025-05-06 00:55:26 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:26.213365 | orchestrator | 2025-05-06 00:55:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:29.262324 | orchestrator | 2025-05-06 00:55:26 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:29.262464 | orchestrator | 2025-05-06 00:55:29 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:29.262876 | orchestrator | 2025-05-06 00:55:29 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:29.264577 | orchestrator | 2025-05-06 00:55:29 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:29.268056 | orchestrator | 2025-05-06 00:55:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:32.321673 | orchestrator | 2025-05-06 00:55:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:32.321889 | orchestrator | 2025-05-06 00:55:32 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:32.324038 | orchestrator | 2025-05-06 00:55:32 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:32.324832 | orchestrator | 2025-05-06 00:55:32 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:32.326497 | orchestrator | 2025-05-06 00:55:32 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:35.391224 | orchestrator | 2025-05-06 00:55:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:35.391346 | orchestrator | 2025-05-06 00:55:35 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:35.391887 | orchestrator | 2025-05-06 00:55:35 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:35.393643 | orchestrator | 2025-05-06 00:55:35 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:35.396419 | orchestrator | 2025-05-06 00:55:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:38.450534 | orchestrator | 2025-05-06 00:55:35 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:38.450678 | orchestrator | 2025-05-06 00:55:38 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:38.451971 | orchestrator | 2025-05-06 00:55:38 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:38.454341 | orchestrator | 2025-05-06 00:55:38 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:38.455962 | orchestrator | 2025-05-06 00:55:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:41.515523 | orchestrator | 2025-05-06 00:55:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:41.515670 | orchestrator | 2025-05-06 00:55:41 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:41.520454 | orchestrator | 2025-05-06 00:55:41 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:41.523448 | orchestrator | 2025-05-06 00:55:41 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:44.575098 | orchestrator | 2025-05-06 00:55:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:44.575227 | orchestrator | 2025-05-06 00:55:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:44.575265 | orchestrator | 2025-05-06 00:55:44 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:44.575883 | orchestrator | 2025-05-06 00:55:44 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:44.575930 | orchestrator | 2025-05-06 00:55:44 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:44.575956 | orchestrator | 2025-05-06 00:55:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:47.629972 | orchestrator | 2025-05-06 00:55:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:47.630155 | orchestrator | 2025-05-06 00:55:47 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:47.631079 | orchestrator | 2025-05-06 00:55:47 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:47.632366 | orchestrator | 2025-05-06 00:55:47 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:47.634840 | orchestrator | 2025-05-06 00:55:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:50.731110 | orchestrator | 2025-05-06 00:55:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:50.731284 | orchestrator | 2025-05-06 00:55:50 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:50.731726 | orchestrator | 2025-05-06 00:55:50 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:50.731802 | orchestrator | 2025-05-06 00:55:50 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:50.731841 | orchestrator | 2025-05-06 00:55:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:53.782718 | orchestrator | 2025-05-06 00:55:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:53.782887 | orchestrator | 2025-05-06 00:55:53 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:53.784047 | orchestrator | 2025-05-06 00:55:53 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:53.785735 | orchestrator | 2025-05-06 00:55:53 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:53.787035 | orchestrator | 2025-05-06 00:55:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:56.837804 | orchestrator | 2025-05-06 00:55:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:56.837966 | orchestrator | 2025-05-06 00:55:56 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:56.839087 | orchestrator | 2025-05-06 00:55:56 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:56.841036 | orchestrator | 2025-05-06 00:55:56 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:56.842863 | orchestrator | 2025-05-06 00:55:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:59.892066 | orchestrator | 2025-05-06 00:55:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:55:59.892207 | orchestrator | 2025-05-06 00:55:59 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:55:59.893306 | orchestrator | 2025-05-06 00:55:59 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:55:59.894981 | orchestrator | 2025-05-06 00:55:59 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:55:59.897214 | orchestrator | 2025-05-06 00:55:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:55:59.897441 | orchestrator | 2025-05-06 00:55:59 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:56:02.952424 | orchestrator | 2025-05-06 00:56:02 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:56:02.953464 | orchestrator | 2025-05-06 00:56:02 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state STARTED 2025-05-06 00:56:02.954350 | orchestrator | 2025-05-06 00:56:02 | INFO  | Task 
76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:56:02.955786 | orchestrator | 2025-05-06 00:56:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:56:02.955941 | orchestrator | 2025-05-06 00:56:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:56:06.025444 | orchestrator | 2025-05-06 00:56:06 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:56:06.027500 | orchestrator | 2025-05-06 00:56:06 | INFO  | Task e4226031-c04b-46ea-84b6-bf9a68d478d2 is in state SUCCESS 2025-05-06 00:56:06.029546 | orchestrator | 2025-05-06 00:56:06.029602 | orchestrator | None 2025-05-06 00:56:06.029619 | orchestrator | 2025-05-06 00:56:06.029634 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 00:56:06.029651 | orchestrator | 2025-05-06 00:56:06.029666 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 00:56:06.029681 | orchestrator | Tuesday 06 May 2025 00:53:55 +0000 (0:00:00.263) 0:00:00.263 *********** 2025-05-06 00:56:06.029696 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:56:06.029712 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:56:06.029756 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:56:06.029773 | orchestrator | 2025-05-06 00:56:06.029788 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 00:56:06.029802 | orchestrator | Tuesday 06 May 2025 00:53:55 +0000 (0:00:00.316) 0:00:00.579 *********** 2025-05-06 00:56:06.029818 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-06 00:56:06.029854 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-06 00:56:06.029869 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-06 00:56:06.029884 | orchestrator | 2025-05-06 00:56:06.029898 | orchestrator | PLAY [Apply 
role opensearch] *************************************************** 2025-05-06 00:56:06.029912 | orchestrator | 2025-05-06 00:56:06.029927 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-06 00:56:06.029941 | orchestrator | Tuesday 06 May 2025 00:53:55 +0000 (0:00:00.210) 0:00:00.790 *********** 2025-05-06 00:56:06.029956 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:56:06.029971 | orchestrator | 2025-05-06 00:56:06.029986 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-06 00:56:06.030000 | orchestrator | Tuesday 06 May 2025 00:53:56 +0000 (0:00:00.442) 0:00:01.232 *********** 2025-05-06 00:56:06.030061 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-06 00:56:06.030080 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-06 00:56:06.030095 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-06 00:56:06.030119 | orchestrator | 2025-05-06 00:56:06.030143 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-06 00:56:06.030167 | orchestrator | Tuesday 06 May 2025 00:53:56 +0000 (0:00:00.641) 0:00:01.873 *********** 2025-05-06 00:56:06.030196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.030228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.030264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.030297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.030316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.030333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.030350 | orchestrator | 2025-05-06 00:56:06.030366 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-06 00:56:06.030383 | orchestrator | Tuesday 06 May 2025 00:53:58 +0000 (0:00:01.452) 0:00:03.326 *********** 2025-05-06 00:56:06.030399 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:56:06.030415 | orchestrator | 2025-05-06 00:56:06.030430 | orchestrator | TASK 
[service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-06 00:56:06.030446 | orchestrator | Tuesday 06 May 2025 00:53:58 +0000 (0:00:00.702) 0:00:04.029 *********** 2025-05-06 00:56:06.030477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.030495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.030510 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.030525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.030547 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.030588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.030605 | orchestrator | 2025-05-06 00:56:06.030619 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-06 00:56:06.030634 | orchestrator | Tuesday 06 May 2025 00:54:02 +0000 (0:00:03.387) 0:00:07.417 *********** 2025-05-06 00:56:06.030649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-06 00:56:06.030664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-06 00:56:06.030686 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:56:06.030709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-06 00:56:06.030753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-06 00:56:06.030769 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:56:06.030784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-06 00:56:06.030799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-06 00:56:06.030827 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:56:06.030841 | orchestrator | 2025-05-06 00:56:06.030855 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-06 00:56:06.030877 | orchestrator | Tuesday 06 May 2025 00:54:03 +0000 (0:00:00.907) 0:00:08.324 *********** 2025-05-06 00:56:06.030899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-06 00:56:06.030915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-06 00:56:06.030930 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:56:06.030944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-06 00:56:06.030959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-06 00:56:06.030981 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:56:06.031001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-06 00:56:06.031016 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-06 00:56:06.031031 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:56:06.031045 | orchestrator | 2025-05-06 00:56:06.031059 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-06 00:56:06.031073 | orchestrator | Tuesday 06 May 2025 00:54:04 +0000 (0:00:00.824) 0:00:09.149 *********** 2025-05-06 00:56:06.031087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.031102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.031123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.031145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.031161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.031176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.031197 | orchestrator | 2025-05-06 00:56:06.031211 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-06 00:56:06.031225 | orchestrator | Tuesday 06 May 2025 00:54:06 +0000 (0:00:02.172) 0:00:11.321 *********** 2025-05-06 00:56:06.031239 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:56:06.031253 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:56:06.031268 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:56:06.031281 | orchestrator | 2025-05-06 00:56:06.031295 | orchestrator 
| TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-06 00:56:06.031309 | orchestrator | Tuesday 06 May 2025 00:54:09 +0000 (0:00:03.105) 0:00:14.427 *********** 2025-05-06 00:56:06.031323 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:56:06.031336 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:56:06.031350 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:56:06.031364 | orchestrator | 2025-05-06 00:56:06.031378 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-06 00:56:06.031392 | orchestrator | Tuesday 06 May 2025 00:54:11 +0000 (0:00:01.841) 0:00:16.269 *********** 2025-05-06 00:56:06.031413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.031429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.031444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-06 00:56:06.031464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.031486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.031502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-06 00:56:06.031516 | orchestrator | 2025-05-06 00:56:06.031531 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-06 00:56:06.031545 | orchestrator | Tuesday 06 May 2025 00:54:13 +0000 (0:00:02.172) 0:00:18.442 *********** 2025-05-06 00:56:06.031558 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:56:06.031572 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:56:06.031586 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:56:06.031600 | orchestrator | 2025-05-06 00:56:06.031614 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-06 00:56:06.031628 | orchestrator | Tuesday 06 May 2025 00:54:13 +0000 (0:00:00.242) 0:00:18.684 *********** 2025-05-06 00:56:06.031642 | orchestrator | 2025-05-06 00:56:06.031656 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-06 00:56:06.031675 | orchestrator | Tuesday 06 May 2025 00:54:13 +0000 (0:00:00.159) 0:00:18.844 *********** 2025-05-06 00:56:06.031690 | orchestrator | 2025-05-06 00:56:06.031703 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-06 00:56:06.031717 | 
orchestrator | Tuesday 06 May 2025 00:54:13 +0000 (0:00:00.057) 0:00:18.901 *********** 2025-05-06 00:56:06.031777 | orchestrator | 2025-05-06 00:56:06.031793 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-06 00:56:06.031808 | orchestrator | Tuesday 06 May 2025 00:54:13 +0000 (0:00:00.056) 0:00:18.957 *********** 2025-05-06 00:56:06.031821 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:56:06.031835 | orchestrator | 2025-05-06 00:56:06.031849 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-06 00:56:06.031863 | orchestrator | Tuesday 06 May 2025 00:54:14 +0000 (0:00:00.170) 0:00:19.128 *********** 2025-05-06 00:56:06.031877 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:56:06.031891 | orchestrator | 2025-05-06 00:56:06.031905 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-06 00:56:06.031919 | orchestrator | Tuesday 06 May 2025 00:54:14 +0000 (0:00:00.342) 0:00:19.471 *********** 2025-05-06 00:56:06.031948 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:56:06.031962 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:56:06.031976 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:56:06.031989 | orchestrator | 2025-05-06 00:56:06.032003 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-06 00:56:06.032017 | orchestrator | Tuesday 06 May 2025 00:54:43 +0000 (0:00:29.120) 0:00:48.592 *********** 2025-05-06 00:56:06.032030 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:56:06.032044 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:56:06.032058 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:56:06.032072 | orchestrator | 2025-05-06 00:56:06.032086 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-06 00:56:06.032100 | 
orchestrator | Tuesday 06 May 2025 00:55:50 +0000 (0:01:06.731) 0:01:55.323 *********** 2025-05-06 00:56:06.032114 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:56:06.032128 | orchestrator | 2025-05-06 00:56:06.032142 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-06 00:56:06.032155 | orchestrator | Tuesday 06 May 2025 00:55:51 +0000 (0:00:00.920) 0:01:56.244 *********** 2025-05-06 00:56:06.032169 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:56:06.032183 | orchestrator | 2025-05-06 00:56:06.032197 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-06 00:56:06.032211 | orchestrator | Tuesday 06 May 2025 00:55:53 +0000 (0:00:02.702) 0:01:58.946 *********** 2025-05-06 00:56:06.032224 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:56:06.032238 | orchestrator | 2025-05-06 00:56:06.032252 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-06 00:56:06.032270 | orchestrator | Tuesday 06 May 2025 00:55:56 +0000 (0:00:02.641) 0:02:01.587 *********** 2025-05-06 00:56:06.032283 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:56:06.032296 | orchestrator | 2025-05-06 00:56:06.032308 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-06 00:56:06.032320 | orchestrator | Tuesday 06 May 2025 00:55:59 +0000 (0:00:03.014) 0:02:04.602 *********** 2025-05-06 00:56:06.032333 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:56:06.032345 | orchestrator | 2025-05-06 00:56:06.032363 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:56:06.032832 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-06 00:56:06.032860 | orchestrator 
| testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-06 00:56:06.032886 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-06 00:56:06.032897 | orchestrator | 2025-05-06 00:56:06.032907 | orchestrator | 2025-05-06 00:56:06.032918 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 00:56:06.032929 | orchestrator | Tuesday 06 May 2025 00:56:02 +0000 (0:00:03.010) 0:02:07.612 *********** 2025-05-06 00:56:06.032940 | orchestrator | =============================================================================== 2025-05-06 00:56:06.032950 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 66.73s 2025-05-06 00:56:06.032961 | orchestrator | opensearch : Restart opensearch container ------------------------------ 29.12s 2025-05-06 00:56:06.032971 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.39s 2025-05-06 00:56:06.032982 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.11s 2025-05-06 00:56:06.032993 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.01s 2025-05-06 00:56:06.033003 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.01s 2025-05-06 00:56:06.033014 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.70s 2025-05-06 00:56:06.033024 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.64s 2025-05-06 00:56:06.033035 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.17s 2025-05-06 00:56:06.033045 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.17s 2025-05-06 00:56:06.033120 | orchestrator | opensearch : Copying over 
opensearch-dashboards config file ------------- 1.84s 2025-05-06 00:56:06.033135 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.45s 2025-05-06 00:56:06.033145 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.92s 2025-05-06 00:56:06.033156 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.91s 2025-05-06 00:56:06.033167 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.82s 2025-05-06 00:56:06.033177 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2025-05-06 00:56:06.033188 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s 2025-05-06 00:56:06.033198 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2025-05-06 00:56:06.033209 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.34s 2025-05-06 00:56:06.033219 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-05-06 00:56:06.033235 | orchestrator | 2025-05-06 00:56:06 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:56:09.080287 | orchestrator | 2025-05-06 00:56:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:56:09.080416 | orchestrator | 2025-05-06 00:56:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:56:09.080456 | orchestrator | 2025-05-06 00:56:09 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:56:09.082766 | orchestrator | 2025-05-06 00:56:09 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:56:09.084160 | orchestrator | 2025-05-06 00:56:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:56:12.132965 
| orchestrator | 2025-05-06 00:56:57 | INFO  | Task
ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:56:57.941948 | orchestrator | 2025-05-06 00:56:57 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:56:57.943221 | orchestrator | 2025-05-06 00:56:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:57:00.991914 | orchestrator | 2025-05-06 00:56:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:57:00.992057 | orchestrator | 2025-05-06 00:57:00 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:57:00.993424 | orchestrator | 2025-05-06 00:57:00 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state STARTED 2025-05-06 00:57:00.995190 | orchestrator | 2025-05-06 00:57:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:57:00.995483 | orchestrator | 2025-05-06 00:57:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:57:04.046305 | orchestrator | 2025-05-06 00:57:04 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED 2025-05-06 00:57:04.058868 | orchestrator | 2025-05-06 00:57:04.058927 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-06 00:57:04.058943 | orchestrator | 2025-05-06 00:57:04.058956 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-06 00:57:04.058969 | orchestrator | 2025-05-06 00:57:04.058982 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-06 00:57:04.058995 | orchestrator | Tuesday 06 May 2025 00:44:31 +0000 (0:00:01.616) 0:00:01.616 *********** 2025-05-06 00:57:04.059009 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.059023 | orchestrator | 2025-05-06 
00:57:04.059036 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-06 00:57:04.059066 | orchestrator | Tuesday 06 May 2025 00:44:32 +0000 (0:00:01.195) 0:00:02.811 *********** 2025-05-06 00:57:04.059080 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-06 00:57:04.059094 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-06 00:57:04.059107 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-06 00:57:04.059120 | orchestrator | 2025-05-06 00:57:04.059133 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-06 00:57:04.059146 | orchestrator | Tuesday 06 May 2025 00:44:33 +0000 (0:00:00.536) 0:00:03.348 *********** 2025-05-06 00:57:04.059160 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.059174 | orchestrator | 2025-05-06 00:57:04.059187 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-06 00:57:04.059294 | orchestrator | Tuesday 06 May 2025 00:44:34 +0000 (0:00:01.117) 0:00:04.466 *********** 2025-05-06 00:57:04.059307 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.059320 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.059333 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.059346 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.059358 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.059371 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.059384 | orchestrator | 2025-05-06 00:57:04.059397 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-06 00:57:04.059409 | orchestrator | Tuesday 06 May 2025 00:44:35 +0000 (0:00:01.477) 0:00:05.943 *********** 2025-05-06 00:57:04.059422 | 
orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.059434 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.059506 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.059522 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.059534 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.059547 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.059559 | orchestrator | 2025-05-06 00:57:04.059572 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-06 00:57:04.059584 | orchestrator | Tuesday 06 May 2025 00:44:36 +0000 (0:00:00.920) 0:00:06.863 *********** 2025-05-06 00:57:04.059597 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.059610 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.059622 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.059635 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.059666 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.059679 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.059692 | orchestrator | 2025-05-06 00:57:04.059704 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-06 00:57:04.059717 | orchestrator | Tuesday 06 May 2025 00:44:38 +0000 (0:00:01.629) 0:00:08.492 *********** 2025-05-06 00:57:04.059729 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.059749 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.059761 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.059774 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.059786 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.059798 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.059810 | orchestrator | 2025-05-06 00:57:04.059823 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-06 00:57:04.059835 | orchestrator | Tuesday 06 May 2025 00:44:39 +0000 (0:00:01.262) 0:00:09.756 *********** 
2025-05-06 00:57:04.059847 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.059860 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.059872 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.059884 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.059896 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.059909 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.059921 | orchestrator |
2025-05-06 00:57:04.059933 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-06 00:57:04.059945 | orchestrator | Tuesday 06 May 2025 00:44:40 +0000 (0:00:00.683) 0:00:10.440 ***********
2025-05-06 00:57:04.059958 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.060060 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.060076 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.060088 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.060101 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.060113 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.060154 | orchestrator |
2025-05-06 00:57:04.060169 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-06 00:57:04.060182 | orchestrator | Tuesday 06 May 2025 00:44:41 +0000 (0:00:00.914) 0:00:11.354 ***********
2025-05-06 00:57:04.060195 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.060209 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.060221 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.060234 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.060246 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.060259 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.060271 | orchestrator |
2025-05-06 00:57:04.060283 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-05-06 00:57:04.060296 | orchestrator | Tuesday 06 May 2025 00:44:42 +0000 (0:00:00.940) 0:00:12.295 ***********
2025-05-06 00:57:04.060308 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.060321 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.060333 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.060346 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.060358 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.060370 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.060383 | orchestrator |
2025-05-06 00:57:04.060407 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-05-06 00:57:04.060429 | orchestrator | Tuesday 06 May 2025 00:44:43 +0000 (0:00:01.007) 0:00:13.302 ***********
2025-05-06 00:57:04.060442 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:04.060455 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-06 00:57:04.060495 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-06 00:57:04.060508 | orchestrator |
2025-05-06 00:57:04.060521 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-05-06 00:57:04.060533 | orchestrator | Tuesday 06 May 2025 00:44:43 +0000 (0:00:00.765) 0:00:14.067 ***********
2025-05-06 00:57:04.060545 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.060558 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.060570 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.060583 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.060595 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.060607 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.060619 | orchestrator |
2025-05-06 00:57:04.060632 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-05-06 00:57:04.060644 | orchestrator | Tuesday 06 May 2025 00:44:45 +0000 (0:00:01.634) 0:00:15.702 ***********
2025-05-06 00:57:04.060681 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:04.060695 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-06 00:57:04.060707 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-06 00:57:04.060719 | orchestrator |
2025-05-06 00:57:04.060732 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-05-06 00:57:04.060744 | orchestrator | Tuesday 06 May 2025 00:44:48 +0000 (0:00:03.136) 0:00:18.839 ***********
2025-05-06 00:57:04.060756 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:04.060769 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:57:04.060781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:57:04.060794 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.060806 | orchestrator |
2025-05-06 00:57:04.060819 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-05-06 00:57:04.060838 | orchestrator | Tuesday 06 May 2025 00:44:49 +0000 (0:00:00.489) 0:00:19.328 ***********
2025-05-06 00:57:04.060852 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-06 00:57:04.060867 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-06 00:57:04.060879 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-06 00:57:04.060892 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.060905 | orchestrator |
2025-05-06 00:57:04.060917 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-05-06 00:57:04.060930 | orchestrator | Tuesday 06 May 2025 00:44:49 +0000 (0:00:00.633) 0:00:19.962 ***********
2025-05-06 00:57:04.060944 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-06 00:57:04.060966 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-06 00:57:04.060979 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-06 00:57:04.060992 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.061004 | orchestrator |
2025-05-06 00:57:04.061017 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] ***************************
2025-05-06 00:57:04.061108 | orchestrator | Tuesday 06 May 2025 00:44:49 +0000 (0:00:00.150) 0:00:20.112 ***********
2025-05-06 00:57:04.061129 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-06 00:44:46.278852', 'end': '2025-05-06 00:44:46.562376', 'delta': '0:00:00.283524', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-06 00:57:04.061146 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-06 00:44:47.156259', 'end': '2025-05-06 00:44:47.461391', 'delta': '0:00:00.305132', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-06 00:57:04.061159 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-06 00:44:48.095230', 'end': '2025-05-06 00:44:48.397427', 'delta': '0:00:00.302197', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-06 00:57:04.061173 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.061185 | orchestrator |
2025-05-06 00:57:04.061198 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] *******************************
2025-05-06 00:57:04.061210 | orchestrator | Tuesday 06 May 2025 00:44:50 +0000 (0:00:00.176) 0:00:20.289 ***********
2025-05-06 00:57:04.061223 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.061236 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.061248 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.061260 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.061273 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.061285 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.061305 | orchestrator |
2025-05-06 00:57:04.061317 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
2025-05-06 00:57:04.061330 | orchestrator | Tuesday 06 May 2025 00:44:51 +0000 (0:00:01.123) 0:00:21.412 ***********
2025-05-06 00:57:04.061343 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.061355 | orchestrator |
2025-05-06 00:57:04.061368 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
2025-05-06 00:57:04.061380 | orchestrator | Tuesday 06 May 2025 00:44:51 +0000 (0:00:00.740) 0:00:22.153 ***********
2025-05-06 00:57:04.061393 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.061405 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.061418 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.061430 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.061442 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.061460 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.061472 | orchestrator |
2025-05-06 00:57:04.061485 | orchestrator | TASK [ceph-facts : get current fsid] *******************************************
2025-05-06 00:57:04.061497 | orchestrator | Tuesday 06 May 2025 00:44:52 +0000 (0:00:00.921) 0:00:23.075 ***********
2025-05-06 00:57:04.061510 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.061522 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.061534 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.061547 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.061559 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.061571 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.061583 | orchestrator |
2025-05-06 00:57:04.061596 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-05-06 00:57:04.061608 | orchestrator | Tuesday 06 May 2025 00:44:53 +0000 (0:00:01.088) 0:00:24.164 ***********
2025-05-06 00:57:04.061621 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.061633 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.061645 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.061686 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.061698 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.061711 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.061723 | orchestrator |
2025-05-06 00:57:04.061735 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] ****************************
2025-05-06 00:57:04.061748 | orchestrator | Tuesday 06 May 2025 00:44:54 +0000 (0:00:00.889) 0:00:25.053 ***********
2025-05-06 00:57:04.061766 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.061779 | orchestrator |
2025-05-06 00:57:04.061792 | orchestrator | TASK [ceph-facts : generate cluster fsid] **************************************
2025-05-06 00:57:04.061804 | orchestrator | Tuesday 06 May 2025 00:44:55 +0000 (0:00:00.212) 0:00:25.266 ***********
2025-05-06 00:57:04.061816 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.061853 | orchestrator |
2025-05-06 00:57:04.061953 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-05-06 00:57:04.061968 | orchestrator | Tuesday 06 May 2025 00:44:56 +0000 (0:00:01.146) 0:00:26.412 ***********
2025-05-06 00:57:04.061980 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.061993 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.062005 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.062066 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.062081 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.062094 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.062106 | orchestrator |
2025-05-06 00:57:04.062163 | orchestrator | TASK [ceph-facts : resolve device link(s)] *************************************
2025-05-06 00:57:04.062177 | orchestrator | Tuesday 06 May 2025 00:44:56 +0000 (0:00:00.563) 0:00:26.975 ***********
2025-05-06 00:57:04.062190 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.062202 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.062215 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.062227 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.062248 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.062261 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.062273 | orchestrator |
2025-05-06 00:57:04.062286 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
2025-05-06 00:57:04.062298 | orchestrator | Tuesday 06 May 2025 00:44:57 +0000 (0:00:00.825) 0:00:27.801 ***********
2025-05-06 00:57:04.062311 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.062324 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.062336 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.062349 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.062361 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.062373 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.062386 | orchestrator |
2025-05-06 00:57:04.062398 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
2025-05-06 00:57:04.062411 | orchestrator | Tuesday 06 May 2025 00:44:58 +0000 (0:00:00.937) 0:00:28.738 ***********
2025-05-06 00:57:04.062423 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.062436 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.062448 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.062460 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.062473 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.062485 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.062498 | orchestrator |
2025-05-06 00:57:04.062510 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
2025-05-06 00:57:04.062523 | orchestrator | Tuesday 06 May 2025 00:44:59 +0000 (0:00:00.972) 0:00:29.710 ***********
2025-05-06 00:57:04.062535 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.062547 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.062559 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.062572 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.062584 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.062596 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.062609 | orchestrator |
2025-05-06 00:57:04.062621 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
2025-05-06 00:57:04.062728 | orchestrator | Tuesday 06 May 2025 00:45:00 +0000 (0:00:00.645) 0:00:30.356 ***********
2025-05-06 00:57:04.062743 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.062755 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.062768 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.062781 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.062793 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.062806 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.062819 | orchestrator |
2025-05-06 00:57:04.062836 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-05-06 00:57:04.062849 | orchestrator | Tuesday 06 May 2025 00:45:00 +0000 (0:00:00.708) 0:00:31.065 ***********
2025-05-06 00:57:04.062862 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.062882 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.062896 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.062908 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.062920 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.062933 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.062945 | orchestrator |
2025-05-06 00:57:04.062958 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] ***
2025-05-06 00:57:04.062970 | orchestrator | Tuesday 06 May 2025 00:45:01 +0000 (0:00:00.598) 0:00:31.664 ***********
2025-05-06 00:57:04.062984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-05-06 00:57:04.063069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834', 'scsi-SQEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part1', 'scsi-SQEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part14', 'scsi-SQEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part15', 'scsi-SQEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part16', 'scsi-SQEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063141 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7066bed1-b6f5-4fc6-91d4-16dfe41e1882', 'scsi-SQEMU_QEMU_HARDDISK_7066bed1-b6f5-4fc6-91d4-16dfe41e1882'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db071690-0f8e-4535-a70c-dc0b8d604c8e', 'scsi-SQEMU_QEMU_HARDDISK_db071690-0f8e-4535-a70c-dc0b8d604c8e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e73239c-12d8-4b54-bea1-88c93f0679a4', 'scsi-SQEMU_QEMU_HARDDISK_1e73239c-12d8-4b54-bea1-88c93f0679a4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-06-00-02-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-05-06 00:57:04.063242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063255 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.063268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042', 'scsi-SQEMU_QEMU_HARDDISK_8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042-part1', 'scsi-SQEMU_QEMU_HARDDISK_8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042-part14', 'scsi-SQEMU_QEMU_HARDDISK_8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042-part15', 'scsi-SQEMU_QEMU_HARDDISK_8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042-part16', 'scsi-SQEMU_QEMU_HARDDISK_8a4fdaae-8037-4dd2-82a3-3a1a9f1ae042-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c7d9a9a-015d-4c6e-aa25-f0276745bfc1', 'scsi-SQEMU_QEMU_HARDDISK_1c7d9a9a-015d-4c6e-aa25-f0276745bfc1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ae31ae6-cfcf-47bb-94a3-29249ee0671c', 'scsi-SQEMU_QEMU_HARDDISK_4ae31ae6-cfcf-47bb-94a3-29249ee0671c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_11dd9f49-985b-4711-8afc-7de7cde1776f', 'scsi-SQEMU_QEMU_HARDDISK_11dd9f49-985b-4711-8afc-7de7cde1776f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-06-00-02-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0',
2025-05-06 00:57:04 | INFO  | Task 76aa1d4b-2d62-4ee8-83d4-9a547537a3f7 is in state SUCCESS
2025-05-06 00:57:04.063633 | orchestrator | 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063676 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.063694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e738e251-d306-48ee-8a06-82586811a686', 'scsi-SQEMU_QEMU_HARDDISK_e738e251-d306-48ee-8a06-82586811a686'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e738e251-d306-48ee-8a06-82586811a686-part1', 'scsi-SQEMU_QEMU_HARDDISK_e738e251-d306-48ee-8a06-82586811a686-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e738e251-d306-48ee-8a06-82586811a686-part14', 'scsi-SQEMU_QEMU_HARDDISK_e738e251-d306-48ee-8a06-82586811a686-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e738e251-d306-48ee-8a06-82586811a686-part15', 'scsi-SQEMU_QEMU_HARDDISK_e738e251-d306-48ee-8a06-82586811a686-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e738e251-d306-48ee-8a06-82586811a686-part16', 'scsi-SQEMU_QEMU_HARDDISK_e738e251-d306-48ee-8a06-82586811a686-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b13833ce-dbae-48be-b135-3251cb983a77', 'scsi-SQEMU_QEMU_HARDDISK_b13833ce-dbae-48be-b135-3251cb983a77'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d2c5f30c-7574-4db2-b6fd-52c11ffcec81', 'scsi-SQEMU_QEMU_HARDDISK_d2c5f30c-7574-4db2-b6fd-52c11ffcec81'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dd3ac05d-c575-4080-995d-3bfc9d0012c6', 'scsi-SQEMU_QEMU_HARDDISK_dd3ac05d-c575-4080-995d-3bfc9d0012c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-06-00-02-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.063829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83550523--1175--5b11--b232--63a45b36e32a-osd--block--83550523--1175--5b11--b232--63a45b36e32a', 'dm-uuid-LVM-GgmBurLjrRojbuVdJgmdwztR3neYgf1c7Ki4DK6SlqESws0brjFgjWvn2dL4wKKq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063843 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.063856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--2fbee355--69b3--5569--a73a--eae1d5356d34-osd--block--2fbee355--69b3--5569--a73a--eae1d5356d34', 'dm-uuid-LVM-jIAwNtMJkYPhxalyfQIKT0DJEOfCeYi271Yl41nyIgwU7qqsMM4cSNC8JeE5HLt6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a0f4265--dd5d--556c--ac35--a800ef93314e-osd--block--8a0f4265--dd5d--556c--ac35--a800ef93314e', 'dm-uuid-LVM-zuegJs53sNFcEk2Qr78Q7DBNbi7NmCWo8O9bST56x01qFU7kwxSq8ZPjRA11dqOE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--108592b4--5156--5470--952e--be389a9738cf-osd--block--108592b4--5156--5470--952e--be389a9738cf', 'dm-uuid-LVM-xsK2Ofv2ainQ3J0edqln2NvhPmXViG7NeYxpNg2B8MvLMGiCEiECcQx5j0MrUj9q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.063989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0', 'scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part1', 'scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part14', 'scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part15', 'scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part16', 'scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.064135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--83550523--1175--5b11--b232--63a45b36e32a-osd--block--83550523--1175--5b11--b232--63a45b36e32a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MwZmhh-rBzg-zyIr-Vk69-Pm39-fPKX-xz875U', 'scsi-0QEMU_QEMU_HARDDISK_8c0721df-98b6-45a8-8372-f184b99eacbe', 'scsi-SQEMU_QEMU_HARDDISK_8c0721df-98b6-45a8-8372-f184b99eacbe'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.064172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2fbee355--69b3--5569--a73a--eae1d5356d34-osd--block--2fbee355--69b3--5569--a73a--eae1d5356d34'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c0zpup-1mYc-QbMy-SPRk-kJl2-ai3v-oQDtTa', 'scsi-0QEMU_QEMU_HARDDISK_cc7f276d-c2ba-4b91-9f6b-a505ec6ab98a', 'scsi-SQEMU_QEMU_HARDDISK_cc7f276d-c2ba-4b91-9f6b-a505ec6ab98a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.064200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e976783-2213-433c-91fb-66c729e68827', 'scsi-SQEMU_QEMU_HARDDISK_7e976783-2213-433c-91fb-66c729e68827'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.064218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-06-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.064232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8', 'scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.064257 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8a0f4265--dd5d--556c--ac35--a800ef93314e-osd--block--8a0f4265--dd5d--556c--ac35--a800ef93314e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cshrKI-P5p2-b0PR-qB7W-hF2D-fccW-9tfpY1', 'scsi-0QEMU_QEMU_HARDDISK_c3e2c64f-9688-4cad-bb81-b3a7d150bd8b', 'scsi-SQEMU_QEMU_HARDDISK_c3e2c64f-9688-4cad-bb81-b3a7d150bd8b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.064271 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.064284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--108592b4--5156--5470--952e--be389a9738cf-osd--block--108592b4--5156--5470--952e--be389a9738cf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8vkhqM-Fm6b-yUju-i25w-b43v-w3ch-2kYXWZ', 'scsi-0QEMU_QEMU_HARDDISK_bc0c56a8-1377-4a36-857b-86c78b746055', 'scsi-SQEMU_QEMU_HARDDISK_bc0c56a8-1377-4a36-857b-86c78b746055'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.064303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eefa0fb1-6e32-4be6-9371-3c36667f9eb4', 'scsi-SQEMU_QEMU_HARDDISK_eefa0fb1-6e32-4be6-9371-3c36667f9eb4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.064316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-06-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:57:04.064329 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.064342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5100a9d2--ae69--5e7a--989d--a5d69986fee9-osd--block--5100a9d2--ae69--5e7a--989d--a5d69986fee9', 'dm-uuid-LVM-x3exsVVRoVE9qjt2tke4ynGkNCRUsEUSIQaEXSrD3ztcPcJlyaxi7VTzV2THqjR0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--376b0c1a--f7d0--50df--9bf6--f05e021d85c5-osd--block--376b0c1a--f7d0--50df--9bf6--f05e021d85c5', 'dm-uuid-LVM-lwatybLHyBWLUDcfTzEaXxgm7hWkw4BeyA07WfaRC32N9BsmxD4KHdMKCrqzZ0dn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:57:04.064502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-06 00:57:04.064547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-06 00:57:04.064563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-06 00:57:04.064576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-06 00:57:04.064595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-06 00:57:04.064613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247', 'scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part1', 'scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part14', 'scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part15', 'scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part16', 'scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-06 00:57:04.064633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5100a9d2--ae69--5e7a--989d--a5d69986fee9-osd--block--5100a9d2--ae69--5e7a--989d--a5d69986fee9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6kocz-QmEq-jdH7-6rqs-amLw-Uefn-wjlZzF', 'scsi-0QEMU_QEMU_HARDDISK_9f4cae81-5600-43ad-ae81-4d2d3f64aa06', 'scsi-SQEMU_QEMU_HARDDISK_9f4cae81-5600-43ad-ae81-4d2d3f64aa06'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-06 00:57:04.064664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--376b0c1a--f7d0--50df--9bf6--f05e021d85c5-osd--block--376b0c1a--f7d0--50df--9bf6--f05e021d85c5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9vQSwD-XmWH-MgjW-mM9S-SKdR-L0Gp-u9GUq6', 'scsi-0QEMU_QEMU_HARDDISK_a5a4c6fa-807d-44c7-a556-c4522912d679', 'scsi-SQEMU_QEMU_HARDDISK_a5a4c6fa-807d-44c7-a556-c4522912d679'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-06 00:57:04.064689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2e4c6c8-e338-4410-96b4-d1d5dab5be16', 'scsi-SQEMU_QEMU_HARDDISK_f2e4c6c8-e338-4410-96b4-d1d5dab5be16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-06 00:57:04.064707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-06-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-06 00:57:04.064720 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.064733 | orchestrator |
2025-05-06 00:57:04.064746 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************
2025-05-06 00:57:04.064759 | orchestrator | Tuesday 06 May 2025 00:45:02 +0000 (0:00:01.427) 0:00:33.091 ***********
2025-05-06 00:57:04.064774 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.064794 | orchestrator |
2025-05-06 00:57:04.064807 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] *******************************
2025-05-06 00:57:04.064819 | orchestrator | Tuesday 06 May 2025 00:45:03 +0000 (0:00:00.452) 0:00:33.544 ***********
2025-05-06 00:57:04.064832 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.064844 | orchestrator |
2025-05-06 00:57:04.064856 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] **************************************
2025-05-06 00:57:04.064868 | orchestrator | Tuesday 06 May 2025 00:45:03 +0000 (0:00:00.279) 0:00:33.824 ***********
2025-05-06 00:57:04.064880 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.064893 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.064905 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.064917 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.064929 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.064941 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.064953 | orchestrator |
2025-05-06 00:57:04.064966 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ******************************
2025-05-06 00:57:04.064978 | orchestrator | Tuesday 06 May 2025 00:45:04 +0000 (0:00:00.931) 0:00:34.755 ***********
2025-05-06 00:57:04.064990 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.065003 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.065015 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.065027 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.065040 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.065084 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.065097 | orchestrator |
2025-05-06 00:57:04.065110 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-05-06 00:57:04.065122 | orchestrator | Tuesday 06 May 2025 00:45:06 +0000 (0:00:01.688) 0:00:36.444 ***********
2025-05-06 00:57:04.065135 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.065147 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.065159 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.065171 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.065184 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.065196 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.065208 | orchestrator |
2025-05-06 00:57:04.065221 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-06 00:57:04.065335 | orchestrator | Tuesday 06 May 2025 00:45:07 +0000 (0:00:00.976) 0:00:37.421 ***********
2025-05-06 00:57:04.065351 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.065369 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.065382 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.065395 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.065407 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.065432 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.065445 | orchestrator |
2025-05-06 00:57:04.065458 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-06 00:57:04.065471 | orchestrator | Tuesday 06 May 2025 00:45:08 +0000 (0:00:01.603) 0:00:39.025 ***********
2025-05-06 00:57:04.065483 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.065495 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.065508 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.065520 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.065533 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.065545 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.065557 | orchestrator |
2025-05-06 00:57:04.065570 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-06 00:57:04.065582 | orchestrator | Tuesday 06 May 2025 00:45:09 +0000 (0:00:00.955) 0:00:39.980 ***********
2025-05-06 00:57:04.065595 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.065607 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.065619 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.065632 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.065644 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.065704 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.065718 | orchestrator |
2025-05-06 00:57:04.065730 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-06 00:57:04.065743 | orchestrator | Tuesday 06 May 2025 00:45:10 +0000 (0:00:01.182) 0:00:41.163 ***********
2025-05-06 00:57:04.065755 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.065773 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.065786 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.065798 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.065810 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.065822 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.065835 | orchestrator |
2025-05-06 00:57:04.065847 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-05-06 00:57:04.065859 | orchestrator | Tuesday 06 May 2025 00:45:11 +0000 (0:00:00.874) 0:00:42.038 ***********
2025-05-06 00:57:04.065872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:04.065884 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-06 00:57:04.065897 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-06 00:57:04.065909 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-06 00:57:04.065922 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:57:04.065934 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.065947 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-06 00:57:04.065959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:57:04.065971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:57:04.065984 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.066001 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-06 00:57:04.066014 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-06 00:57:04.066121 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.066135 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:57:04.066154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:57:04.066171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:57:04.066193 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.066206 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:57:04.066218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:57:04.066230 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:57:04.066242 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.066255 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:57:04.066267 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:57:04.066279 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.066291 | orchestrator |
2025-05-06 00:57:04.066304 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-05-06 00:57:04.066316 | orchestrator | Tuesday 06 May 2025 00:45:14 +0000 (0:00:02.675) 0:00:44.713 ***********
2025-05-06 00:57:04.066328 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-06 00:57:04.066340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:04.066352 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-06 00:57:04.066364 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-06 00:57:04.066376 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:57:04.066388 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-06 00:57:04.066401 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-06 00:57:04.066413 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:57:04.066425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:57:04.066437 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.066449 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.066462 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-06 00:57:04.066474 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.066486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:57:04.066498 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:57:04.066511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:57:04.066523 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.066536 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:57:04.066556 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:57:04.066569 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:57:04.066581 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:57:04.066594 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.066606 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:57:04.066619 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.066631 | orchestrator |
2025-05-06 00:57:04.066643 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-06 00:57:04.066674 | orchestrator | Tuesday 06 May 2025 00:45:17 +0000 (0:00:02.573) 0:00:47.287 ***********
2025-05-06 00:57:04.066687 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:04.066699 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-05-06 00:57:04.066711 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:57:04.066724 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-05-06 00:57:04.066736 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-05-06 00:57:04.066748 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:57:04.066760 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:57:04.066773 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:57:04.066785 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:57:04.066797 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-05-06 00:57:04.066816 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-05-06 00:57:04.066828 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:57:04.066840 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:57:04.066912 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:57:04.066925 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:57:04.066938 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-05-06 00:57:04.066950 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:57:04.067054 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:57:04.067069 | orchestrator |
2025-05-06 00:57:04.067081 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-06 00:57:04.067094 | orchestrator | Tuesday 06 May 2025 00:45:22 +0000 (0:00:05.053) 0:00:52.340 ***********
2025-05-06 00:57:04.067106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:04.067119 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:57:04.067131 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:57:04.067143 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-06 00:57:04.067156 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-06 00:57:04.067168 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-06 00:57:04.067180 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.067193 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-06 00:57:04.067205 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-06 00:57:04.067217 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-06 00:57:04.067229 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.067248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:57:04.067261 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.067273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:57:04.067285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:57:04.067298 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:57:04.067310 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:57:04.067322 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:57:04.067334 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.067347 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.067359 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:57:04.067398 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:57:04.067412 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:57:04.067424 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.067437 | orchestrator |
2025-05-06 00:57:04.067449 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-06 00:57:04.067462 | orchestrator | Tuesday 06 May 2025 00:45:23 +0000 (0:00:01.480) 0:00:53.821 ***********
2025-05-06 00:57:04.067496 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:04.067515 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:57:04.067528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:57:04.067540 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.067553 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-06 00:57:04.067607 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-06 00:57:04.067620 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-06 00:57:04.067632 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.067645 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-06 00:57:04.067683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:57:04.067696 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-06 00:57:04.067708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:57:04.067720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:57:04.067732 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-06 00:57:04.067780 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.067795 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.067808 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:57:04.067820 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:57:04.067832 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:57:04.067844 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:57:04.067857 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.067869 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:57:04.067881 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:57:04.067893 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.067906 | orchestrator |
2025-05-06 00:57:04.067919 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-05-06 00:57:04.067931 | orchestrator | Tuesday 06 May 2025 00:45:24 +0000 (0:00:00.988) 0:00:54.810 ***********
2025-05-06 00:57:04.067944 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-06 00:57:04.067956 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-06 00:57:04.067969 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-06 00:57:04.067982 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-06 00:57:04.067994 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-06 00:57:04.068006 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-06 00:57:04.068019 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-06 00:57:04.068031 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-06 00:57:04.068043 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-06 00:57:04.068056 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-06 00:57:04.068068 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-06 00:57:04.068081 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-06 00:57:04.068093 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-06 00:57:04.068105 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-06 00:57:04.068117 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-06 00:57:04.068130 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.068142 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.068154 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-06 00:57:04.068167 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-06 00:57:04.068179 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-06 00:57:04.068192 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.068204 | orchestrator |
2025-05-06 00:57:04.068223 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-05-06 00:57:04.068236 | orchestrator | Tuesday 06 May 2025 00:45:26 +0000 (0:00:01.587) 0:00:56.398 ***********
2025-05-06 00:57:04.068248 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.068261 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.068273 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.068285 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.068298 | orchestrator |
2025-05-06 00:57:04.068310 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-06 00:57:04.068322 | orchestrator | Tuesday 06 May 2025 00:45:27 +0000 (0:00:01.407) 0:00:57.805 ***********
2025-05-06 00:57:04.068334 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.068347 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.068359 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.068372 | orchestrator |
2025-05-06 00:57:04.068384 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-06 00:57:04.068396 | orchestrator | Tuesday 06 May 2025 00:45:28 +0000 (0:00:00.632) 0:00:58.437 ***********
2025-05-06 00:57:04.068408 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.068420 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.068433 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.068469 | orchestrator |
2025-05-06 00:57:04.068482 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-06 00:57:04.068495 | orchestrator | Tuesday 06 May 2025 00:45:28 +0000 (0:00:00.650) 0:00:59.087 ***********
2025-05-06 00:57:04.068507 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.068519 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.068532 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.068544 | orchestrator |
2025-05-06 00:57:04.068556 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-06 00:57:04.068569 | orchestrator | Tuesday 06 May 2025 00:45:29 +0000 (0:00:00.534) 0:00:59.621 ***********
2025-05-06 00:57:04.068581 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.068593 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.068612 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.068625 | orchestrator |
2025-05-06 00:57:04.068637 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-06 00:57:04.068704 | orchestrator | Tuesday 06 May 2025 00:45:30 +0000 (0:00:00.873) 0:01:00.495 ***********
2025-05-06 00:57:04.068718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:57:04.068731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-06 00:57:04.068743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-06 00:57:04.068756 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.068768 | orchestrator |
2025-05-06 00:57:04.068780 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-06 00:57:04.068793 | orchestrator | Tuesday 06 May 2025 00:45:30 +0000 (0:00:00.613) 0:01:01.109 ***********
2025-05-06 00:57:04.068805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:57:04.068818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-06 00:57:04.068830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-06 00:57:04.068842 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.068855 | orchestrator |
2025-05-06 00:57:04.068867 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-06 00:57:04.068879 | orchestrator | Tuesday 06 May 2025 00:45:31 +0000 (0:00:01.020) 0:01:02.129 ***********
2025-05-06 00:57:04.068892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:57:04.068904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-06 00:57:04.068917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-06 00:57:04.068936 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.068955 | orchestrator |
2025-05-06 00:57:04.068967 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-06 00:57:04.068980 | orchestrator | Tuesday 06 May 2025 00:45:33 +0000 (0:00:01.273) 0:01:03.403 ***********
2025-05-06 00:57:04.068992 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.069009 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.069022 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.069034 | orchestrator |
2025-05-06 00:57:04.069047 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-06 00:57:04.069059 | orchestrator | Tuesday 06 May 2025 00:45:33 +0000 (0:00:00.535) 0:01:03.939 ***********
2025-05-06 00:57:04.069072 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-06 00:57:04.069084 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-06 00:57:04.069097 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-06 00:57:04.069110 | orchestrator |
2025-05-06 00:57:04.069122 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-06 00:57:04.069134 | orchestrator | Tuesday 06 May 2025 00:45:34 +0000 (0:00:01.000) 0:01:04.939 ***********
2025-05-06 00:57:04.069147 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.069159 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.069171 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.069183 | orchestrator |
2025-05-06 00:57:04.069196 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-06 00:57:04.069208 | orchestrator | Tuesday 06 May 2025 00:45:35 +0000 (0:00:00.549) 0:01:05.488 ***********
2025-05-06 00:57:04.069218 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.069228 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.069238 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.069248 | orchestrator |
2025-05-06 00:57:04.069258 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-06 00:57:04.069269 | orchestrator | Tuesday 06 May 2025 00:45:35 +0000 (0:00:00.589) 0:01:06.078 ***********
2025-05-06 00:57:04.069279 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-06 00:57:04.069289 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.069299 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-06 00:57:04.069309 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.069320 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-06 00:57:04.069330 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.069340 | orchestrator |
2025-05-06 00:57:04.069350 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-06 00:57:04.069360 | orchestrator | Tuesday 06 May 2025 00:45:36 +0000 (0:00:00.621) 0:01:06.700 ***********
2025-05-06 00:57:04.069370 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-06 00:57:04.069380 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.069391 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-06 00:57:04.069401 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.069411 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-06 00:57:04.069421 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.069432 | orchestrator |
2025-05-06 00:57:04.069445 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-06 00:57:04.069456 | orchestrator | Tuesday 06 May 2025 00:45:37 +0000 (0:00:00.656) 0:01:07.357 ***********
2025-05-06 00:57:04.069466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:57:04.069476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-06 00:57:04.069487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-06 00:57:04.069505 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.069515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-06 00:57:04.069525 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-06 00:57:04.069536 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-06 00:57:04.069550 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-06 00:57:04.070248 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.070358 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-06 00:57:04.070382 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-06 00:57:04.070401 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.070420 | orchestrator |
2025-05-06 00:57:04.070438 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-06 00:57:04.070457 | orchestrator | Tuesday 06 May 2025 00:45:38 +0000 (0:00:00.991) 0:01:08.348 ***********
2025-05-06 00:57:04.070544 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.070565 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.070584 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.070602 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.070619 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.070637 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.070676 | orchestrator |
2025-05-06 00:57:04.070696 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-06 00:57:04.070713 | orchestrator | Tuesday 06 May 2025 00:45:39 +0000 (0:00:00.971) 0:01:09.319 ***********
2025-05-06 00:57:04.070732 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:04.070751 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-06 00:57:04.070770 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-06 00:57:04.070787 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-06 00:57:04.070805 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-06 00:57:04.070822 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-06 00:57:04.070840 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-06 00:57:04.070858 | orchestrator |
2025-05-06 00:57:04.070912 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command]
******************************** 2025-05-06 00:57:04.070931 | orchestrator | Tuesday 06 May 2025 00:45:40 +0000 (0:00:01.222) 0:01:10.542 *********** 2025-05-06 00:57:04.070947 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-06 00:57:04.070964 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-06 00:57:04.070980 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-06 00:57:04.070996 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-06 00:57:04.071011 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-06 00:57:04.071029 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-06 00:57:04.071076 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-06 00:57:04.071094 | orchestrator | 2025-05-06 00:57:04.071111 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-06 00:57:04.071127 | orchestrator | Tuesday 06 May 2025 00:45:41 +0000 (0:00:01.626) 0:01:12.168 *********** 2025-05-06 00:57:04.071143 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.071160 | orchestrator | 2025-05-06 00:57:04.071177 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-06 00:57:04.071218 | orchestrator | Tuesday 06 May 2025 00:45:43 +0000 (0:00:01.164) 0:01:13.332 *********** 2025-05-06 00:57:04.071280 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.071300 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.071317 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.071334 | 
orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.071351 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.071368 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.071385 | orchestrator | 2025-05-06 00:57:04.071401 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-06 00:57:04.071419 | orchestrator | Tuesday 06 May 2025 00:45:43 +0000 (0:00:00.805) 0:01:14.138 *********** 2025-05-06 00:57:04.071435 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.071452 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.071468 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.071482 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.071492 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.071502 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.071512 | orchestrator | 2025-05-06 00:57:04.071522 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-06 00:57:04.071532 | orchestrator | Tuesday 06 May 2025 00:45:45 +0000 (0:00:01.298) 0:01:15.436 *********** 2025-05-06 00:57:04.071542 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.071553 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.071562 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.071572 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.071582 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.071592 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.071602 | orchestrator | 2025-05-06 00:57:04.071635 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-06 00:57:04.071672 | orchestrator | Tuesday 06 May 2025 00:45:46 +0000 (0:00:01.432) 0:01:16.869 *********** 2025-05-06 00:57:04.071694 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.071713 | orchestrator | skipping: [testbed-node-1] 
2025-05-06 00:57:04.071731 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.071750 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.071771 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.071791 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.071805 | orchestrator | 2025-05-06 00:57:04.071815 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-06 00:57:04.071825 | orchestrator | Tuesday 06 May 2025 00:45:47 +0000 (0:00:01.184) 0:01:18.054 *********** 2025-05-06 00:57:04.071835 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.071863 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.071900 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.071912 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.071924 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.071966 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.071986 | orchestrator | 2025-05-06 00:57:04.072011 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-06 00:57:04.072028 | orchestrator | Tuesday 06 May 2025 00:45:48 +0000 (0:00:01.159) 0:01:19.213 *********** 2025-05-06 00:57:04.072044 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.072060 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.072077 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.072093 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.072108 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.072124 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.072140 | orchestrator | 2025-05-06 00:57:04.072157 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-06 00:57:04.072173 | orchestrator | Tuesday 06 May 2025 00:45:49 +0000 (0:00:00.797) 0:01:20.010 *********** 2025-05-06 00:57:04.072191 | orchestrator 
| skipping: [testbed-node-0] 2025-05-06 00:57:04.072208 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.072237 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.072252 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.072269 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.072285 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.072300 | orchestrator | 2025-05-06 00:57:04.072316 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-06 00:57:04.072333 | orchestrator | Tuesday 06 May 2025 00:45:51 +0000 (0:00:01.356) 0:01:21.367 *********** 2025-05-06 00:57:04.072349 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.072365 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.072383 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.072400 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.072418 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.072435 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.072451 | orchestrator | 2025-05-06 00:57:04.072467 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-06 00:57:04.072482 | orchestrator | Tuesday 06 May 2025 00:45:51 +0000 (0:00:00.763) 0:01:22.131 *********** 2025-05-06 00:57:04.072497 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.072513 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.072529 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.072544 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.072560 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.072578 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.072594 | orchestrator | 2025-05-06 00:57:04.072611 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-06 
00:57:04.072630 | orchestrator | Tuesday 06 May 2025 00:45:52 +0000 (0:00:00.935) 0:01:23.066 *********** 2025-05-06 00:57:04.072663 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.072683 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.072700 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.072723 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.072734 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.072745 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.072755 | orchestrator | 2025-05-06 00:57:04.072765 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-06 00:57:04.072775 | orchestrator | Tuesday 06 May 2025 00:45:53 +0000 (0:00:00.676) 0:01:23.742 *********** 2025-05-06 00:57:04.072785 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.072795 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.072805 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.072815 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.072826 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.072843 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.072859 | orchestrator | 2025-05-06 00:57:04.072877 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-06 00:57:04.072891 | orchestrator | Tuesday 06 May 2025 00:45:54 +0000 (0:00:01.109) 0:01:24.852 *********** 2025-05-06 00:57:04.072901 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.072912 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.072922 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.072932 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.072942 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.072952 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.072962 | orchestrator | 2025-05-06 00:57:04.072972 | 
orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-06 00:57:04.072982 | orchestrator | Tuesday 06 May 2025 00:45:55 +0000 (0:00:00.651) 0:01:25.503 *********** 2025-05-06 00:57:04.072993 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.073003 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.073013 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.073028 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.073046 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.073073 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.073101 | orchestrator | 2025-05-06 00:57:04.073116 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-06 00:57:04.073126 | orchestrator | Tuesday 06 May 2025 00:45:56 +0000 (0:00:00.830) 0:01:26.333 *********** 2025-05-06 00:57:04.073136 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.073147 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.073157 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.073167 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.073177 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.073187 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.073197 | orchestrator | 2025-05-06 00:57:04.073208 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-06 00:57:04.073217 | orchestrator | Tuesday 06 May 2025 00:45:56 +0000 (0:00:00.515) 0:01:26.849 *********** 2025-05-06 00:57:04.073227 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.073237 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.073247 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.073257 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.073267 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.073277 | orchestrator | ok: 
[testbed-node-5] 2025-05-06 00:57:04.073287 | orchestrator | 2025-05-06 00:57:04.073297 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-06 00:57:04.073337 | orchestrator | Tuesday 06 May 2025 00:45:57 +0000 (0:00:00.652) 0:01:27.501 *********** 2025-05-06 00:57:04.073355 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.073373 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.073386 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.073396 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.073406 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.073436 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.073455 | orchestrator | 2025-05-06 00:57:04.073473 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-06 00:57:04.073536 | orchestrator | Tuesday 06 May 2025 00:45:57 +0000 (0:00:00.656) 0:01:28.157 *********** 2025-05-06 00:57:04.073553 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.073563 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.073575 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.073592 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.073630 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.073642 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.073723 | orchestrator | 2025-05-06 00:57:04.073736 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-06 00:57:04.073746 | orchestrator | Tuesday 06 May 2025 00:45:58 +0000 (0:00:00.507) 0:01:28.665 *********** 2025-05-06 00:57:04.073757 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.073767 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.073777 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.073787 | orchestrator | skipping: [testbed-node-3] 
2025-05-06 00:57:04.073797 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.073807 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.073816 | orchestrator | 2025-05-06 00:57:04.073826 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-06 00:57:04.073836 | orchestrator | Tuesday 06 May 2025 00:45:59 +0000 (0:00:00.738) 0:01:29.403 *********** 2025-05-06 00:57:04.073847 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.073856 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.073867 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.073877 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.073893 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.073911 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.073926 | orchestrator | 2025-05-06 00:57:04.073937 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-06 00:57:04.073947 | orchestrator | Tuesday 06 May 2025 00:45:59 +0000 (0:00:00.605) 0:01:30.008 *********** 2025-05-06 00:57:04.074082 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.074099 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.074110 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.074120 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.074130 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.074140 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.074150 | orchestrator | 2025-05-06 00:57:04.074160 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-06 00:57:04.074182 | orchestrator | Tuesday 06 May 2025 00:46:00 +0000 (0:00:00.815) 0:01:30.824 *********** 2025-05-06 00:57:04.074193 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.074203 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.074224 | orchestrator | skipping: 
[testbed-node-2] 2025-05-06 00:57:04.074234 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.074250 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.074267 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.074286 | orchestrator | 2025-05-06 00:57:04.074297 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-06 00:57:04.074307 | orchestrator | Tuesday 06 May 2025 00:46:01 +0000 (0:00:00.638) 0:01:31.463 *********** 2025-05-06 00:57:04.074317 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.074332 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.074342 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.074352 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.074362 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.074372 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.074382 | orchestrator | 2025-05-06 00:57:04.074392 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-06 00:57:04.074402 | orchestrator | Tuesday 06 May 2025 00:46:02 +0000 (0:00:00.913) 0:01:32.376 *********** 2025-05-06 00:57:04.074412 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.074422 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.074432 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.074442 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.074451 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.074461 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.074471 | orchestrator | 2025-05-06 00:57:04.074481 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-06 00:57:04.074491 | orchestrator | Tuesday 06 May 2025 00:46:03 +0000 (0:00:00.966) 0:01:33.342 *********** 2025-05-06 00:57:04.074501 | orchestrator | skipping: 
[testbed-node-0] 2025-05-06 00:57:04.074511 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.074521 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.074531 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.074541 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.074550 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.074560 | orchestrator | 2025-05-06 00:57:04.074570 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-06 00:57:04.074580 | orchestrator | Tuesday 06 May 2025 00:46:04 +0000 (0:00:00.983) 0:01:34.325 *********** 2025-05-06 00:57:04.074590 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.074600 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.074610 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.074620 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.074630 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.074640 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.074671 | orchestrator | 2025-05-06 00:57:04.074682 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-06 00:57:04.074692 | orchestrator | Tuesday 06 May 2025 00:46:04 +0000 (0:00:00.597) 0:01:34.922 *********** 2025-05-06 00:57:04.074702 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.074712 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.074722 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.074740 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.074750 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.074760 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.074770 | orchestrator | 2025-05-06 00:57:04.074785 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-06 00:57:04.074802 | 
orchestrator | Tuesday 06 May 2025 00:46:05 +0000 (0:00:00.798) 0:01:35.721 *********** 2025-05-06 00:57:04.074818 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.074828 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.074838 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.074848 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.074858 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.074868 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.074878 | orchestrator | 2025-05-06 00:57:04.074888 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-06 00:57:04.074899 | orchestrator | Tuesday 06 May 2025 00:46:06 +0000 (0:00:00.636) 0:01:36.357 *********** 2025-05-06 00:57:04.074909 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.074919 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.074929 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.074939 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.074949 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.074959 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.074968 | orchestrator | 2025-05-06 00:57:04.074979 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-06 00:57:04.074989 | orchestrator | Tuesday 06 May 2025 00:46:06 +0000 (0:00:00.773) 0:01:37.130 *********** 2025-05-06 00:57:04.074999 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.075009 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.075019 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.075029 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.075039 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.075049 | orchestrator | skipping: [testbed-node-5] 2025-05-06 
00:57:04.075058 | orchestrator | 2025-05-06 00:57:04.075068 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-06 00:57:04.075079 | orchestrator | Tuesday 06 May 2025 00:46:07 +0000 (0:00:00.673) 0:01:37.804 *********** 2025-05-06 00:57:04.075089 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.075099 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.075108 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.075118 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.075133 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.075143 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.075153 | orchestrator | 2025-05-06 00:57:04.075163 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-06 00:57:04.075173 | orchestrator | Tuesday 06 May 2025 00:46:08 +0000 (0:00:00.880) 0:01:38.685 *********** 2025-05-06 00:57:04.075183 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.075193 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.075203 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.075213 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.075223 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.075239 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.075249 | orchestrator | 2025-05-06 00:57:04.075259 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-06 00:57:04.075269 | orchestrator | Tuesday 06 May 2025 00:46:09 +0000 (0:00:00.607) 0:01:39.293 *********** 2025-05-06 00:57:04.075279 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.075289 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.075299 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.075309 | orchestrator 
| skipping: [testbed-node-3] 2025-05-06 00:57:04.075319 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.075334 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.075344 | orchestrator | 2025-05-06 00:57:04.075354 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-06 00:57:04.075365 | orchestrator | Tuesday 06 May 2025 00:46:09 +0000 (0:00:00.782) 0:01:40.075 *********** 2025-05-06 00:57:04.075375 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-06 00:57:04.075385 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-06 00:57:04.075395 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.075405 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-06 00:57:04.075415 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-06 00:57:04.075425 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.075435 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-06 00:57:04.075447 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-06 00:57:04.075464 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.075482 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.075507 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.075523 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.075540 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.075556 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.075573 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.075590 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.075613 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.075631 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.075666 | orchestrator | 2025-05-06 00:57:04.075685 | orchestrator | TASK [ceph-config : drop 
osd_memory_target from conf override] *****************
Tuesday 06 May 2025 00:46:10 +0000 (0:00:00.750) 0:01:40.826 ***********
skipping: [testbed-node-0] => (item=osd memory target)
skipping: [testbed-node-0] => (item=osd_memory_target)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=osd memory target)
skipping: [testbed-node-1] => (item=osd_memory_target)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=osd memory target)
skipping: [testbed-node-2] => (item=osd_memory_target)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=osd memory target)
skipping: [testbed-node-3] => (item=osd_memory_target)
skipping: [testbed-node-4] => (item=osd memory target)
skipping: [testbed-node-4] => (item=osd_memory_target)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=osd memory target)
skipping: [testbed-node-5] => (item=osd_memory_target)
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _osd_memory_target] *******************************
Tuesday 06 May 2025 00:46:11 +0000 (0:00:01.138) 0:01:41.965 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : create ceph conf directory] ********************************
Tuesday 06 May 2025 00:46:12 +0000 (0:00:00.670) 0:01:42.635 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Tuesday 06 May 2025 00:46:13 +0000 (0:00:00.800) 0:01:43.436 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
Tuesday 06 May 2025 00:46:13 +0000 (0:00:00.623) 0:01:44.060 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
Tuesday 06 May 2025 00:46:14 +0000 (0:00:00.860) 0:01:44.921 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
Tuesday 06 May 2025 00:46:15 +0000 (0:00:00.620) 0:01:45.541 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact _interface] ****************************************
Tuesday 06 May 2025 00:46:16 +0000 (0:00:01.170) 0:01:46.712 ***********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
Tuesday 06 May 2025 00:46:16 +0000 (0:00:00.503) 0:01:47.216 ***********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
Tuesday 06 May 2025 00:46:17 +0000 (0:00:00.448) 0:01:47.665 ***********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Tuesday 06 May 2025 00:46:17 +0000 (0:00:00.563) 0:01:48.228 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
Tuesday 06 May 2025 00:46:18 +0000 (0:00:01.093) 0:01:48.866 ***********
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=0)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
Tuesday 06 May 2025 00:46:19 +0000 (0:00:01.093) 0:01:49.960 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Tuesday 06 May 2025 00:46:20 +0000 (0:00:00.572) 0:01:50.532 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
Tuesday 06 May 2025 00:46:21 +0000 (0:00:00.804) 0:01:51.337 ***********
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=0)
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances_host] ********************************
Tuesday 06 May 2025 00:46:22 +0000 (0:00:00.939) 0:01:52.276 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-5]

TASK [ceph-facts : set_fact rgw_instances_all] *********************************
Tuesday 06 May 2025 00:46:22 +0000 (0:00:00.814) 0:01:53.091 ***********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=testbed-node-3)
skipping: [testbed-node-1] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-5)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-3)
skipping: [testbed-node-5] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-5)
skipping: [testbed-node-5]

TASK [ceph-config : generate ceph.conf configuration file] *********************
Tuesday 06 May 2025 00:46:24 +0000 (0:00:01.733) 0:01:54.824 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : create rgw keyrings] ******************************************
Tuesday 06 May 2025 00:46:25 +0000 (0:00:01.322) 0:01:56.147 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=None)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=None)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=None)
skipping: [testbed-node-5]

TASK [ceph-rgw : include_tasks multisite] **************************************
Tuesday 06 May 2025 00:46:27 +0000 (0:00:01.383) 0:01:57.531 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
Tuesday 06 May 2025 00:46:28 +0000 (0:00:01.242) 0:01:58.773 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : generate systemd ceph-mon target file] ***********
Tuesday 06 May 2025 00:46:29 +0000 (0:00:01.280) 0:02:00.054 ***********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-container-common : enable ceph.target] ******************************
Tuesday 06 May 2025 00:46:31 +0000 (0:00:01.411) 0:02:01.465 ***********
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-container-common : include prerequisites.yml] ***********************
Tuesday 06 May 2025 00:46:33 +0000 (0:00:02.608) 0:02:04.074 ***********
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-container-common : stop lvmetad] ************************************
Tuesday 06 May 2025 00:46:35 +0000 (0:00:01.193) 0:02:05.267 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : disable and mask lvmetad service] ****************
Tuesday 06 May 2025 00:46:35 +0000 (0:00:00.775) 0:02:06.043 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : remove ceph udev rules] **************************
Tuesday 06 May 2025 00:46:36 +0000 (0:00:00.587) 0:02:06.630 ***********
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : ensure tmpfiles.d is present] ********************
Tuesday 06 May 2025 00:46:37 +0000 (0:00:01.506) 0:02:08.137 ***********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-container-common : restore certificates selinux context] ************
Tuesday 06 May 2025 00:46:38 +0000 (0:00:00.995) 0:02:09.132 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : include registry.yml] ****************************
Tuesday 06 May 2025 00:46:40 +0000 (0:00:01.185) 0:02:10.317 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : include fetch_image.yml] *************************
Tuesday 06 May 2025 00:46:40 +0000 (0:00:00.861) 0:02:11.179 ***********
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] ***
Tuesday 06 May 2025 00:46:42 +0000 (0:00:01.235) 0:02:12.415 ***********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-4]

TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] ***
Tuesday 06 May 2025 00:47:26 +0000 (0:00:44.490) 0:02:56.905 ***********
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-5]

TASK [ceph-container-common : pulling node-exporter container image] ***********
Tuesday 06 May 2025 00:47:27 +0000 (0:00:00.770) 0:02:57.676 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : export local ceph dev image] *********************
Tuesday 06 May 2025 00:47:28 +0000 (0:00:00.597) 0:02:58.273 ***********
skipping: [testbed-node-0]

TASK [ceph-container-common : copy ceph dev image file] ************************
Tuesday 06 May 2025 00:47:28 +0000 (0:00:00.177) 0:02:58.450 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : load ceph dev image] *****************************
Tuesday 06 May 2025 00:47:28 +0000 (0:00:00.759) 0:02:59.210 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : remove tmp ceph dev image file] ******************
Tuesday 06 May 2025 00:47:29 +0000 (0:00:00.512) 0:02:59.723 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : get ceph version] ********************************
Tuesday 06 May 2025 00:47:30 +0000 (0:00:00.764) 0:03:00.487 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] ***
Tuesday 06 May 2025 00:47:31 +0000 (0:00:01.597) 0:03:02.085 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-container-common : include release.yml] *****************************
Tuesday 06 May 2025 00:47:32 +0000 (0:00:00.810) 0:03:02.895 ***********
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-container-common : set_fact ceph_release jewel] *********************
Tuesday 06 May 2025 00:47:33 +0000 (0:00:01.059) 0:03:03.954 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release kraken] ********************
Tuesday 06 May 2025 00:47:34 +0000 (0:00:00.540) 0:03:04.495 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release luminous] ******************
Tuesday 06 May 2025 00:47:35 +0000 (0:00:00.994) 0:03:05.489 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release mimic] *********************
Tuesday 06 May 2025 00:47:36 +0000 (0:00:00.745) 0:03:06.235 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release nautilus] ******************
Tuesday 06 May 2025 00:47:37 +0000 (0:00:01.018) 0:03:07.253 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release octopus] *******************
Tuesday 06 May 2025 00:47:37 +0000 (0:00:00.671) 0:03:07.924 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release pacific] *******************
Tuesday 06 May 2025 00:47:38 +0000 (0:00:01.007) 0:03:08.932 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : set_fact ceph_release quincy] ********************
Tuesday 06 May 2025 00:47:39 +0000 (0:00:00.621) 0:03:09.554 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
Tuesday 06 May 2025 00:47:40 +0000 (0:00:01.143) 0:03:10.697 ***********
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
00:57:04.084867 | orchestrator | 2025-05-06 00:57:04.084880 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-06 00:57:04.084893 | orchestrator | Tuesday 06 May 2025 00:47:41 +0000 (0:00:01.011) 0:03:11.709 *********** 2025-05-06 00:57:04.084907 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-06 00:57:04.084920 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-06 00:57:04.084951 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-06 00:57:04.084964 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-06 00:57:04.084976 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-06 00:57:04.084989 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-06 00:57:04.085001 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-06 00:57:04.085014 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-06 00:57:04.085026 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-06 00:57:04.085038 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-06 00:57:04.085051 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-06 00:57:04.085063 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-06 00:57:04.085076 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-06 00:57:04.085088 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-06 00:57:04.085101 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-06 00:57:04.085114 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-06 00:57:04.085126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-06 00:57:04.085139 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-06 00:57:04.085152 
| orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-06 00:57:04.085164 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-06 00:57:04.085177 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-06 00:57:04.085189 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-06 00:57:04.085202 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-06 00:57:04.085214 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-06 00:57:04.085227 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-06 00:57:04.085240 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-06 00:57:04.085252 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-06 00:57:04.085265 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-06 00:57:04.085276 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-06 00:57:04.085287 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-06 00:57:04.085300 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-06 00:57:04.085312 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-06 00:57:04.085325 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-06 00:57:04.085338 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-06 00:57:04.085350 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-06 00:57:04.085363 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-06 00:57:04.085382 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-06 00:57:04.085395 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-06 00:57:04.085408 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-06 00:57:04.085515 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-06 00:57:04.085531 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-06 00:57:04.085543 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-06 00:57:04.085556 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-06 00:57:04.085569 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-06 00:57:04.085583 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-06 00:57:04.085638 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-06 00:57:04.085678 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-06 00:57:04.085692 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-06 00:57:04.085705 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-06 00:57:04.085718 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-06 00:57:04.085730 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-06 00:57:04.085744 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-06 00:57:04.085756 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-06 00:57:04.085769 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-06 00:57:04.085782 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-06 00:57:04.085795 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-06 00:57:04.085808 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-06 
00:57:04.085820 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-06 00:57:04.085833 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-06 00:57:04.085846 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-06 00:57:04.085859 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-06 00:57:04.085872 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-06 00:57:04.085885 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-06 00:57:04.085898 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-06 00:57:04.085910 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-06 00:57:04.085923 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-06 00:57:04.085936 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-06 00:57:04.085949 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-06 00:57:04.085962 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-06 00:57:04.085975 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-06 00:57:04.085987 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-06 00:57:04.086000 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-06 00:57:04.086013 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-06 00:57:04.086054 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-06 00:57:04.086068 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-06 00:57:04.086083 | orchestrator | 
changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-06 00:57:04.086096 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-06 00:57:04.086109 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-06 00:57:04.086121 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-06 00:57:04.086134 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-06 00:57:04.086150 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-06 00:57:04.086167 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-06 00:57:04.086182 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-06 00:57:04.086198 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-06 00:57:04.086214 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-06 00:57:04.086231 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-06 00:57:04.086254 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-06 00:57:04.086271 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-06 00:57:04.086288 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-06 00:57:04.086304 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-06 00:57:04.086320 | orchestrator | 2025-05-06 00:57:04.086337 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-06 00:57:04.086359 | orchestrator | Tuesday 06 May 2025 00:47:47 +0000 (0:00:05.948) 0:03:17.657 *********** 2025-05-06 00:57:04.086376 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.086392 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.086409 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.086425 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.086442 | orchestrator | 2025-05-06 00:57:04.086531 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-06 00:57:04.086549 | orchestrator | Tuesday 06 May 2025 00:47:48 +0000 (0:00:01.044) 0:03:18.702 *********** 2025-05-06 00:57:04.086562 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-06 00:57:04.086575 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-06 00:57:04.086587 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-06 00:57:04.086599 | orchestrator | 2025-05-06 00:57:04.086665 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-06 00:57:04.086683 | orchestrator | Tuesday 06 May 2025 00:47:49 +0000 (0:00:01.134) 0:03:19.837 *********** 2025-05-06 00:57:04.086695 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-06 00:57:04.086707 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-06 00:57:04.086719 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-06 00:57:04.086732 | orchestrator | 2025-05-06 00:57:04.086745 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-06 00:57:04.086758 | orchestrator | Tuesday 06 May 2025 00:47:50 +0000 (0:00:01.217) 
0:03:21.054 *********** 2025-05-06 00:57:04.086771 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.086784 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.086797 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.086810 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.086823 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.086836 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.086849 | orchestrator | 2025-05-06 00:57:04.086862 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-06 00:57:04.086876 | orchestrator | Tuesday 06 May 2025 00:47:51 +0000 (0:00:00.792) 0:03:21.847 *********** 2025-05-06 00:57:04.086889 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.086902 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.086916 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.086929 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.086942 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.086955 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.086968 | orchestrator | 2025-05-06 00:57:04.086981 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-06 00:57:04.086994 | orchestrator | Tuesday 06 May 2025 00:47:52 +0000 (0:00:00.615) 0:03:22.462 *********** 2025-05-06 00:57:04.087007 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.087029 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.087043 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.087056 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.087069 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.087081 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.087094 | orchestrator | 2025-05-06 00:57:04.087107 | orchestrator | TASK [ceph-config : set_fact rejected_devices] 
********************************* 2025-05-06 00:57:04.087120 | orchestrator | Tuesday 06 May 2025 00:47:53 +0000 (0:00:00.854) 0:03:23.317 *********** 2025-05-06 00:57:04.087132 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.087147 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.087163 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.087179 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.087193 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.087209 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.087224 | orchestrator | 2025-05-06 00:57:04.087239 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-06 00:57:04.087253 | orchestrator | Tuesday 06 May 2025 00:47:53 +0000 (0:00:00.588) 0:03:23.905 *********** 2025-05-06 00:57:04.087266 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.087278 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.087291 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.087304 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.087317 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.087330 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.087343 | orchestrator | 2025-05-06 00:57:04.087356 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-06 00:57:04.087369 | orchestrator | Tuesday 06 May 2025 00:47:54 +0000 (0:00:00.828) 0:03:24.734 *********** 2025-05-06 00:57:04.087382 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.087394 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.087407 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.087420 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.087440 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.087453 | orchestrator | 
skipping: [testbed-node-5] 2025-05-06 00:57:04.087465 | orchestrator | 2025-05-06 00:57:04.087478 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-06 00:57:04.087491 | orchestrator | Tuesday 06 May 2025 00:47:55 +0000 (0:00:00.709) 0:03:25.443 *********** 2025-05-06 00:57:04.087504 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.087517 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.087530 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.087542 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.087555 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.087567 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.087580 | orchestrator | 2025-05-06 00:57:04.087593 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-06 00:57:04.087606 | orchestrator | Tuesday 06 May 2025 00:47:56 +0000 (0:00:00.916) 0:03:26.359 *********** 2025-05-06 00:57:04.087722 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.087741 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.087754 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.087767 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.087780 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.087793 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.087849 | orchestrator | 2025-05-06 00:57:04.087864 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-06 00:57:04.087877 | orchestrator | Tuesday 06 May 2025 00:47:56 +0000 (0:00:00.587) 0:03:26.947 *********** 2025-05-06 00:57:04.087890 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.087903 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.087926 | orchestrator | 
skipping: [testbed-node-2] 2025-05-06 00:57:04.087939 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.087952 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.087965 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.087977 | orchestrator | 2025-05-06 00:57:04.087990 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-06 00:57:04.088003 | orchestrator | Tuesday 06 May 2025 00:47:59 +0000 (0:00:02.360) 0:03:29.307 *********** 2025-05-06 00:57:04.088015 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.088029 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.088042 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.088055 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.088068 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.088081 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.088093 | orchestrator | 2025-05-06 00:57:04.088106 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-06 00:57:04.088119 | orchestrator | Tuesday 06 May 2025 00:47:59 +0000 (0:00:00.674) 0:03:29.981 *********** 2025-05-06 00:57:04.088132 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-06 00:57:04.088145 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-06 00:57:04.088158 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.088171 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-06 00:57:04.088189 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-06 00:57:04.088202 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.088215 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-06 00:57:04.088228 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-06 00:57:04.088241 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.088254 | orchestrator | skipping: [testbed-node-3] => 
(item=)  2025-05-06 00:57:04.088266 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.088279 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.088292 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.088304 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.088317 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.088331 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.088343 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.088356 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.088368 | orchestrator | 2025-05-06 00:57:04.088382 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-06 00:57:04.088397 | orchestrator | Tuesday 06 May 2025 00:48:00 +0000 (0:00:01.003) 0:03:30.985 *********** 2025-05-06 00:57:04.088412 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-06 00:57:04.088427 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-06 00:57:04.088439 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.088451 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-06 00:57:04.088462 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-06 00:57:04.088472 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.088483 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-06 00:57:04.088495 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-06 00:57:04.088506 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.088518 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-05-06 00:57:04.088531 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-06 00:57:04.088541 | orchestrator | ok: [testbed-node-4] => (item=osd 
memory target) 2025-05-06 00:57:04.088552 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-06 00:57:04.088565 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-06 00:57:04.088577 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-06 00:57:04.088588 | orchestrator | 2025-05-06 00:57:04.088600 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-06 00:57:04.088621 | orchestrator | Tuesday 06 May 2025 00:48:01 +0000 (0:00:00.643) 0:03:31.629 *********** 2025-05-06 00:57:04.088632 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.088643 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.088709 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.088721 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.088732 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.088745 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.088756 | orchestrator | 2025-05-06 00:57:04.088768 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-06 00:57:04.088780 | orchestrator | Tuesday 06 May 2025 00:48:02 +0000 (0:00:00.888) 0:03:32.517 *********** 2025-05-06 00:57:04.088792 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.088803 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.088814 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.088826 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.088838 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.088849 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.088861 | orchestrator | 2025-05-06 00:57:04.088873 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-06 00:57:04.088885 | orchestrator | Tuesday 06 May 2025 00:48:03 
+0000 (0:00:00.714) 0:03:33.231 *********** 2025-05-06 00:57:04.088897 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.088908 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.089032 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.089065 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.089077 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.089094 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.089105 | orchestrator | 2025-05-06 00:57:04.089115 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-06 00:57:04.089127 | orchestrator | Tuesday 06 May 2025 00:48:04 +0000 (0:00:01.022) 0:03:34.253 *********** 2025-05-06 00:57:04.089209 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.089226 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.089242 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.089258 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.089274 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.089289 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.089303 | orchestrator | 2025-05-06 00:57:04.089326 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-06 00:57:04.089343 | orchestrator | Tuesday 06 May 2025 00:48:04 +0000 (0:00:00.614) 0:03:34.868 *********** 2025-05-06 00:57:04.089357 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.089372 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.089387 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.089403 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.089418 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.089432 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.089448 | orchestrator | 2025-05-06 00:57:04.089463 | orchestrator | TASK [ceph-facts 
: set_fact _radosgw_address to radosgw_address] *************** 2025-05-06 00:57:04.089479 | orchestrator | Tuesday 06 May 2025 00:48:05 +0000 (0:00:00.932) 0:03:35.801 *********** 2025-05-06 00:57:04.089492 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.089507 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.089521 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.089534 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.089548 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.089562 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.089576 | orchestrator | 2025-05-06 00:57:04.089590 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-06 00:57:04.089603 | orchestrator | Tuesday 06 May 2025 00:48:06 +0000 (0:00:00.741) 0:03:36.542 *********** 2025-05-06 00:57:04.089629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.089672 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.089683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.089693 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.089704 | orchestrator | 2025-05-06 00:57:04.089718 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-06 00:57:04.089736 | orchestrator | Tuesday 06 May 2025 00:48:06 +0000 (0:00:00.667) 0:03:37.210 *********** 2025-05-06 00:57:04.089754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.089773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.089790 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.089808 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.089824 | orchestrator | 2025-05-06 00:57:04.089841 | orchestrator | TASK [ceph-facts : 
set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-06 00:57:04.089858 | orchestrator | Tuesday 06 May 2025 00:48:07 +0000 (0:00:00.738) 0:03:37.948 *********** 2025-05-06 00:57:04.089875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.089893 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.089910 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.089926 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.089943 | orchestrator | 2025-05-06 00:57:04.089954 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-06 00:57:04.089965 | orchestrator | Tuesday 06 May 2025 00:48:08 +0000 (0:00:00.477) 0:03:38.426 *********** 2025-05-06 00:57:04.089975 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.089985 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.089995 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.090006 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.090041 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.090055 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.090066 | orchestrator | 2025-05-06 00:57:04.090077 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-06 00:57:04.090088 | orchestrator | Tuesday 06 May 2025 00:48:08 +0000 (0:00:00.771) 0:03:39.198 *********** 2025-05-06 00:57:04.090098 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-06 00:57:04.090110 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.090121 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-06 00:57:04.090132 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.090144 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-06 00:57:04.090157 | orchestrator | skipping: [testbed-node-2] 2025-05-06 
00:57:04.090168 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-06 00:57:04.090180 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-06 00:57:04.090192 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-06 00:57:04.090204 | orchestrator | 2025-05-06 00:57:04.090215 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-06 00:57:04.090227 | orchestrator | Tuesday 06 May 2025 00:48:10 +0000 (0:00:01.552) 0:03:40.750 *********** 2025-05-06 00:57:04.090240 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.090253 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.090265 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.090276 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.090288 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.090302 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.090315 | orchestrator | 2025-05-06 00:57:04.090326 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-06 00:57:04.090337 | orchestrator | Tuesday 06 May 2025 00:48:11 +0000 (0:00:00.612) 0:03:41.362 *********** 2025-05-06 00:57:04.090348 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.090372 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.090383 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.090395 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.090504 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.090523 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.090534 | orchestrator | 2025-05-06 00:57:04.090545 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-06 00:57:04.090555 | orchestrator | Tuesday 06 May 2025 00:48:11 +0000 (0:00:00.857) 0:03:42.220 *********** 2025-05-06 00:57:04.090565 | orchestrator | skipping: 
[testbed-node-0] => (item=0)  2025-05-06 00:57:04.090576 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.090585 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-06 00:57:04.090595 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.090606 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-06 00:57:04.090616 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.090637 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-06 00:57:04.090702 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.090717 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-06 00:57:04.090727 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.090737 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-06 00:57:04.090749 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.090760 | orchestrator | 2025-05-06 00:57:04.090772 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-06 00:57:04.090784 | orchestrator | Tuesday 06 May 2025 00:48:12 +0000 (0:00:00.717) 0:03:42.937 *********** 2025-05-06 00:57:04.090794 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.090805 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.090816 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.090828 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.090838 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.090848 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.090858 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.090868 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': 
'192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.090878 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.090888 | orchestrator | 2025-05-06 00:57:04.090899 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-06 00:57:04.090909 | orchestrator | Tuesday 06 May 2025 00:48:13 +0000 (0:00:00.801) 0:03:43.739 *********** 2025-05-06 00:57:04.090925 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.090936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.090946 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.090957 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.090967 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-06 00:57:04.090977 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-06 00:57:04.090986 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-06 00:57:04.090996 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.091006 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-06 00:57:04.091016 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-06 00:57:04.091028 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-06 00:57:04.091041 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.091055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.091068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.091090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.091103 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.091115 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-06 00:57:04.091128 | orchestrator | 
skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-06 00:57:04.091140 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-06 00:57:04.091149 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-06 00:57:04.091162 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-06 00:57:04.091174 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.091187 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-06 00:57:04.091200 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.091211 | orchestrator | 2025-05-06 00:57:04.091223 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-06 00:57:04.091232 | orchestrator | Tuesday 06 May 2025 00:48:15 +0000 (0:00:01.627) 0:03:45.367 *********** 2025-05-06 00:57:04.091244 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.091255 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.091268 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.091281 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.091294 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.091307 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.091319 | orchestrator | 2025-05-06 00:57:04.091332 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-06 00:57:04.091346 | orchestrator | Tuesday 06 May 2025 00:48:19 +0000 (0:00:04.287) 0:03:49.654 *********** 2025-05-06 00:57:04.091360 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.091371 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.091384 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.091396 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.091409 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.091421 | orchestrator | changed: [testbed-node-5] 
2025-05-06 00:57:04.091433 | orchestrator | 2025-05-06 00:57:04.091445 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-06 00:57:04.091459 | orchestrator | Tuesday 06 May 2025 00:48:20 +0000 (0:00:00.999) 0:03:50.654 *********** 2025-05-06 00:57:04.091572 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.091587 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.091596 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.091609 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:57:04.091621 | orchestrator | 2025-05-06 00:57:04.091633 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-06 00:57:04.091790 | orchestrator | Tuesday 06 May 2025 00:48:21 +0000 (0:00:00.948) 0:03:51.602 *********** 2025-05-06 00:57:04.091813 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.091827 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.091839 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.091852 | orchestrator | 2025-05-06 00:57:04.091876 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-06 00:57:04.091891 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.091905 | orchestrator | 2025-05-06 00:57:04.091919 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-06 00:57:04.091933 | orchestrator | Tuesday 06 May 2025 00:48:22 +0000 (0:00:00.896) 0:03:52.499 *********** 2025-05-06 00:57:04.091947 | orchestrator | 2025-05-06 00:57:04.091961 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-06 00:57:04.091974 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2025-05-06 00:57:04.091989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.092002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.092023 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.092037 | orchestrator | 2025-05-06 00:57:04.092052 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-06 00:57:04.092066 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.092081 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.092096 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.092109 | orchestrator | 2025-05-06 00:57:04.092123 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-06 00:57:04.092138 | orchestrator | Tuesday 06 May 2025 00:48:23 +0000 (0:00:01.141) 0:03:53.641 *********** 2025-05-06 00:57:04.092152 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-06 00:57:04.092171 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-06 00:57:04.092186 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-06 00:57:04.092200 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.092215 | orchestrator | 2025-05-06 00:57:04.092229 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-06 00:57:04.092243 | orchestrator | Tuesday 06 May 2025 00:48:24 +0000 (0:00:00.684) 0:03:54.325 *********** 2025-05-06 00:57:04.092255 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.092269 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.092282 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.092296 | orchestrator | 2025-05-06 00:57:04.092309 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-06 00:57:04.092323 | 
orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.092337 | orchestrator | 2025-05-06 00:57:04.092350 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-06 00:57:04.092360 | orchestrator | Tuesday 06 May 2025 00:48:24 +0000 (0:00:00.601) 0:03:54.927 *********** 2025-05-06 00:57:04.092373 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.092387 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.092398 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.092416 | orchestrator | 2025-05-06 00:57:04.092433 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-05-06 00:57:04.092448 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.092463 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.092479 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.092495 | orchestrator | 2025-05-06 00:57:04.092510 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-06 00:57:04.092526 | orchestrator | Tuesday 06 May 2025 00:48:25 +0000 (0:00:00.521) 0:03:55.448 *********** 2025-05-06 00:57:04.092542 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.092557 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.092572 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.092597 | orchestrator | 2025-05-06 00:57:04.092613 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-06 00:57:04.092629 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.092645 | orchestrator | 2025-05-06 00:57:04.092690 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-06 00:57:04.092707 | orchestrator | Tuesday 06 May 2025 00:48:25 +0000 (0:00:00.747) 0:03:56.196 *********** 2025-05-06 00:57:04.092723 | orchestrator | 
skipping: [testbed-node-0] 2025-05-06 00:57:04.092737 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.092751 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.092765 | orchestrator | 2025-05-06 00:57:04.092780 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-06 00:57:04.092795 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.092808 | orchestrator | 2025-05-06 00:57:04.092822 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-06 00:57:04.092837 | orchestrator | Tuesday 06 May 2025 00:48:26 +0000 (0:00:00.841) 0:03:57.037 *********** 2025-05-06 00:57:04.092851 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.092878 | orchestrator | 2025-05-06 00:57:04.092892 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-06 00:57:04.092906 | orchestrator | Tuesday 06 May 2025 00:48:26 +0000 (0:00:00.159) 0:03:57.197 *********** 2025-05-06 00:57:04.092916 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.092926 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.092936 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.092946 | orchestrator | 2025-05-06 00:57:04.092958 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-06 00:57:04.092969 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.092982 | orchestrator | 2025-05-06 00:57:04.092994 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-06 00:57:04.093120 | orchestrator | Tuesday 06 May 2025 00:48:27 +0000 (0:00:00.797) 0:03:57.995 *********** 2025-05-06 00:57:04.093136 | orchestrator | 2025-05-06 00:57:04.093145 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-06 00:57:04.093156 | 
orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.093170 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:57:04.093183 | orchestrator | 2025-05-06 00:57:04.093196 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-06 00:57:04.093207 | orchestrator | Tuesday 06 May 2025 00:48:28 +0000 (0:00:00.881) 0:03:58.876 *********** 2025-05-06 00:57:04.093279 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.093297 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.093318 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.093336 | orchestrator | 2025-05-06 00:57:04.093352 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-06 00:57:04.093372 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.093391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.093406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.093427 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.093445 | orchestrator | 2025-05-06 00:57:04.093460 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-06 00:57:04.093487 | orchestrator | Tuesday 06 May 2025 00:48:29 +0000 (0:00:01.141) 0:04:00.018 *********** 2025-05-06 00:57:04.093502 | orchestrator | 2025-05-06 00:57:04.093520 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-06 00:57:04.093539 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.093552 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.093568 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.093587 | orchestrator | 2025-05-06 00:57:04.093604 | orchestrator | RUNNING HANDLER [ceph-handler : 
copy mgr restart script] *********************** 2025-05-06 00:57:04.093617 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.093634 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.093669 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.093678 | orchestrator | 2025-05-06 00:57:04.093689 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-06 00:57:04.093701 | orchestrator | Tuesday 06 May 2025 00:48:31 +0000 (0:00:01.462) 0:04:01.481 *********** 2025-05-06 00:57:04.093718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-06 00:57:04.093729 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-06 00:57:04.093745 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-06 00:57:04.093761 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.093778 | orchestrator | 2025-05-06 00:57:04.093793 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-06 00:57:04.093806 | orchestrator | Tuesday 06 May 2025 00:48:32 +0000 (0:00:00.829) 0:04:02.311 *********** 2025-05-06 00:57:04.093820 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.093836 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.093863 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.093877 | orchestrator | 2025-05-06 00:57:04.093893 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-06 00:57:04.093909 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.093924 | orchestrator | 2025-05-06 00:57:04.093937 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-06 00:57:04.093952 | orchestrator | Tuesday 06 May 2025 00:48:33 +0000 (0:00:01.067) 0:04:03.379 *********** 2025-05-06 00:57:04.093968 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.093979 | orchestrator | 2025-05-06 00:57:04.093993 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-06 00:57:04.094007 | orchestrator | Tuesday 06 May 2025 00:48:33 +0000 (0:00:00.565) 0:04:03.944 *********** 2025-05-06 00:57:04.094053 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.094071 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.094087 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.094101 | orchestrator | 2025-05-06 00:57:04.094111 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-05-06 00:57:04.094122 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.094133 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.094146 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.094165 | orchestrator | 2025-05-06 00:57:04.094186 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-06 00:57:04.094205 | orchestrator | Tuesday 06 May 2025 00:48:34 +0000 (0:00:01.233) 0:04:05.178 *********** 2025-05-06 00:57:04.094222 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.094237 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.094255 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.094274 | orchestrator | 2025-05-06 00:57:04.094290 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-06 00:57:04.094307 | orchestrator | Tuesday 06 May 2025 00:48:36 +0000 (0:00:01.206) 0:04:06.384 *********** 2025-05-06 00:57:04.094325 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.094342 | orchestrator | 2025-05-06 00:57:04.094358 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-06 
00:57:04.094376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.094393 | orchestrator | 2025-05-06 00:57:04.094410 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-06 00:57:04.094428 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.094445 | orchestrator | 2025-05-06 00:57:04.094458 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-06 00:57:04.094469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.094482 | orchestrator | 2025-05-06 00:57:04.094502 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-06 00:57:04.094522 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.094537 | orchestrator | 2025-05-06 00:57:04.094548 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-06 00:57:04.094558 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.094684 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.094708 | orchestrator | 2025-05-06 00:57:04.094721 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-06 00:57:04.094735 | orchestrator | Tuesday 06 May 2025 00:48:37 +0000 (0:00:01.567) 0:04:07.953 *********** 2025-05-06 00:57:04.094747 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.094818 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.094833 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.094847 | orchestrator | 2025-05-06 00:57:04.094862 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-06 00:57:04.094875 | orchestrator | Tuesday 06 May 2025 00:48:38 +0000 (0:00:01.240) 0:04:09.193 *********** 2025-05-06 00:57:04.094888 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.094914 | orchestrator | 2025-05-06 00:57:04.094926 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-06 00:57:04.094938 | orchestrator | Tuesday 06 May 2025 00:48:39 +0000 (0:00:00.640) 0:04:09.834 *********** 2025-05-06 00:57:04.094951 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.094962 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.094977 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.094989 | orchestrator | 2025-05-06 00:57:04.095010 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-06 00:57:04.095029 | orchestrator | Tuesday 06 May 2025 00:48:40 +0000 (0:00:00.444) 0:04:10.278 *********** 2025-05-06 00:57:04.095045 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.095128 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.095150 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.095166 | orchestrator | 2025-05-06 00:57:04.095183 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-06 00:57:04.095203 | orchestrator | Tuesday 06 May 2025 00:48:41 +0000 (0:00:01.140) 0:04:11.419 *********** 2025-05-06 00:57:04.095222 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.095250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.095267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.095281 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.095300 | orchestrator | 2025-05-06 00:57:04.095318 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-06 00:57:04.095333 | orchestrator | Tuesday 06 May 2025 00:48:41 +0000 (0:00:00.591) 
0:04:12.010 *********** 2025-05-06 00:57:04.095348 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.095367 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.095384 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.095409 | orchestrator | 2025-05-06 00:57:04.095426 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-06 00:57:04.095437 | orchestrator | Tuesday 06 May 2025 00:48:42 +0000 (0:00:00.281) 0:04:12.291 *********** 2025-05-06 00:57:04.095453 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.095465 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.095481 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.095498 | orchestrator | 2025-05-06 00:57:04.095513 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-06 00:57:04.095526 | orchestrator | Tuesday 06 May 2025 00:48:42 +0000 (0:00:00.291) 0:04:12.583 *********** 2025-05-06 00:57:04.095541 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.095558 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.095572 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.095584 | orchestrator | 2025-05-06 00:57:04.095600 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-06 00:57:04.095615 | orchestrator | Tuesday 06 May 2025 00:48:42 +0000 (0:00:00.410) 0:04:12.993 *********** 2025-05-06 00:57:04.095628 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.095641 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.095668 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.095677 | orchestrator | 2025-05-06 00:57:04.095687 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-06 00:57:04.095703 | orchestrator | Tuesday 06 May 2025 00:48:43 +0000 (0:00:00.276) 0:04:13.269 
*********** 2025-05-06 00:57:04.095716 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.095729 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.095743 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.095755 | orchestrator | 2025-05-06 00:57:04.095764 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-06 00:57:04.095778 | orchestrator | 2025-05-06 00:57:04.095791 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-06 00:57:04.095819 | orchestrator | Tuesday 06 May 2025 00:48:44 +0000 (0:00:01.891) 0:04:15.161 *********** 2025-05-06 00:57:04.095834 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:57:04.095847 | orchestrator | 2025-05-06 00:57:04.095860 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-06 00:57:04.095874 | orchestrator | Tuesday 06 May 2025 00:48:45 +0000 (0:00:00.754) 0:04:15.916 *********** 2025-05-06 00:57:04.095887 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.095899 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.095912 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.095925 | orchestrator | 2025-05-06 00:57:04.095938 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-06 00:57:04.095949 | orchestrator | Tuesday 06 May 2025 00:48:46 +0000 (0:00:00.758) 0:04:16.675 *********** 2025-05-06 00:57:04.095962 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.095976 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.095988 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.096002 | orchestrator | 2025-05-06 00:57:04.096014 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 
2025-05-06 00:57:04.096027 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.641) 0:04:17.316 *********** 2025-05-06 00:57:04.096039 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.096053 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.096065 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.096078 | orchestrator | 2025-05-06 00:57:04.096203 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-06 00:57:04.096219 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.368) 0:04:17.684 *********** 2025-05-06 00:57:04.096228 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.096237 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.096252 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.096268 | orchestrator | 2025-05-06 00:57:04.096281 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-06 00:57:04.096291 | orchestrator | Tuesday 06 May 2025 00:48:47 +0000 (0:00:00.359) 0:04:18.044 *********** 2025-05-06 00:57:04.096347 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.096359 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.096370 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.096382 | orchestrator | 2025-05-06 00:57:04.096394 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-06 00:57:04.096406 | orchestrator | Tuesday 06 May 2025 00:48:48 +0000 (0:00:00.769) 0:04:18.814 *********** 2025-05-06 00:57:04.096418 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.096429 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.096441 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.096453 | orchestrator | 2025-05-06 00:57:04.096466 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-06 
00:57:04.096477 | orchestrator | Tuesday 06 May 2025 00:48:49 +0000 (0:00:00.597) 0:04:19.411 *********** 2025-05-06 00:57:04.096489 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.096500 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.096511 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.096523 | orchestrator | 2025-05-06 00:57:04.096534 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-06 00:57:04.096545 | orchestrator | Tuesday 06 May 2025 00:48:49 +0000 (0:00:00.326) 0:04:19.738 *********** 2025-05-06 00:57:04.096557 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.096568 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.096579 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.096590 | orchestrator | 2025-05-06 00:57:04.096605 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-06 00:57:04.096621 | orchestrator | Tuesday 06 May 2025 00:48:49 +0000 (0:00:00.334) 0:04:20.072 *********** 2025-05-06 00:57:04.096676 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.096692 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.096709 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.096730 | orchestrator | 2025-05-06 00:57:04.096746 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-06 00:57:04.096778 | orchestrator | Tuesday 06 May 2025 00:48:50 +0000 (0:00:00.371) 0:04:20.443 *********** 2025-05-06 00:57:04.096793 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.096813 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.096829 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.096843 | orchestrator | 2025-05-06 00:57:04.096862 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-06 
00:57:04.096878 | orchestrator | Tuesday 06 May 2025 00:48:50 +0000 (0:00:00.542) 0:04:20.986 *********** 2025-05-06 00:57:04.096890 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.096909 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.096926 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.096939 | orchestrator | 2025-05-06 00:57:04.096955 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-06 00:57:04.096973 | orchestrator | Tuesday 06 May 2025 00:48:51 +0000 (0:00:00.615) 0:04:21.602 *********** 2025-05-06 00:57:04.096987 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.097001 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.097018 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.097033 | orchestrator | 2025-05-06 00:57:04.097045 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-06 00:57:04.097062 | orchestrator | Tuesday 06 May 2025 00:48:51 +0000 (0:00:00.280) 0:04:21.883 *********** 2025-05-06 00:57:04.097073 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.097089 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.097112 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.097129 | orchestrator | 2025-05-06 00:57:04.097144 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-06 00:57:04.097157 | orchestrator | Tuesday 06 May 2025 00:48:51 +0000 (0:00:00.305) 0:04:22.189 *********** 2025-05-06 00:57:04.097169 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.097177 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.097186 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.097195 | orchestrator | 2025-05-06 00:57:04.097209 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-06 00:57:04.097222 | orchestrator | 
Tuesday 06 May 2025 00:48:52 +0000 (0:00:00.404) 0:04:22.593 *********** 2025-05-06 00:57:04.097237 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.097253 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.097267 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.097280 | orchestrator | 2025-05-06 00:57:04.097296 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-06 00:57:04.097312 | orchestrator | Tuesday 06 May 2025 00:48:52 +0000 (0:00:00.357) 0:04:22.950 *********** 2025-05-06 00:57:04.097324 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.097338 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.097353 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.097365 | orchestrator | 2025-05-06 00:57:04.097376 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-06 00:57:04.097389 | orchestrator | Tuesday 06 May 2025 00:48:53 +0000 (0:00:00.451) 0:04:23.402 *********** 2025-05-06 00:57:04.097402 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.097416 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.097431 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.097444 | orchestrator | 2025-05-06 00:57:04.097457 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-06 00:57:04.097469 | orchestrator | Tuesday 06 May 2025 00:48:53 +0000 (0:00:00.296) 0:04:23.699 *********** 2025-05-06 00:57:04.097482 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.097507 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.097521 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.097533 | orchestrator | 2025-05-06 00:57:04.097667 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-06 00:57:04.097681 | orchestrator | 
Tuesday 06 May 2025 00:48:53 +0000 (0:00:00.452) 0:04:24.151 *********** 2025-05-06 00:57:04.097696 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.097711 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.097726 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.097794 | orchestrator | 2025-05-06 00:57:04.097809 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-06 00:57:04.097823 | orchestrator | Tuesday 06 May 2025 00:48:54 +0000 (0:00:00.268) 0:04:24.420 *********** 2025-05-06 00:57:04.097835 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.097849 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.097863 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.097876 | orchestrator | 2025-05-06 00:57:04.097889 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-06 00:57:04.097900 | orchestrator | Tuesday 06 May 2025 00:48:54 +0000 (0:00:00.258) 0:04:24.678 *********** 2025-05-06 00:57:04.097909 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.097919 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.097928 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.097939 | orchestrator | 2025-05-06 00:57:04.097950 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-06 00:57:04.097960 | orchestrator | Tuesday 06 May 2025 00:48:54 +0000 (0:00:00.242) 0:04:24.921 *********** 2025-05-06 00:57:04.097972 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.097983 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.097996 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.098009 | orchestrator | 2025-05-06 00:57:04.098048 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-06 00:57:04.098064 | orchestrator | Tuesday 06 May 2025 00:48:55 +0000 
(0:00:00.389) 0:04:25.311 *********** 2025-05-06 00:57:04.098079 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.098092 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.098103 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.098113 | orchestrator | 2025-05-06 00:57:04.098123 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-06 00:57:04.098136 | orchestrator | Tuesday 06 May 2025 00:48:55 +0000 (0:00:00.260) 0:04:25.572 *********** 2025-05-06 00:57:04.098149 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.098164 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.098177 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.098189 | orchestrator | 2025-05-06 00:57:04.098201 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-06 00:57:04.098214 | orchestrator | Tuesday 06 May 2025 00:48:55 +0000 (0:00:00.299) 0:04:25.872 *********** 2025-05-06 00:57:04.098227 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.098244 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.098265 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.098276 | orchestrator | 2025-05-06 00:57:04.098290 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-06 00:57:04.098320 | orchestrator | Tuesday 06 May 2025 00:48:55 +0000 (0:00:00.303) 0:04:26.175 *********** 2025-05-06 00:57:04.098340 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.098362 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.098382 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.098398 | orchestrator | 2025-05-06 00:57:04.098418 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-06 00:57:04.098438 | orchestrator | Tuesday 06 May 2025 00:48:56 +0000 
(0:00:00.417) 0:04:26.593 *********** 2025-05-06 00:57:04.098455 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.098472 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.098511 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.098527 | orchestrator | 2025-05-06 00:57:04.098542 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-06 00:57:04.098562 | orchestrator | Tuesday 06 May 2025 00:48:56 +0000 (0:00:00.245) 0:04:26.838 *********** 2025-05-06 00:57:04.098581 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.098598 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.098613 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.098630 | orchestrator | 2025-05-06 00:57:04.098691 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-06 00:57:04.098711 | orchestrator | Tuesday 06 May 2025 00:48:56 +0000 (0:00:00.313) 0:04:27.152 *********** 2025-05-06 00:57:04.098735 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.098746 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.098760 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.098776 | orchestrator | 2025-05-06 00:57:04.098790 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-06 00:57:04.098803 | orchestrator | Tuesday 06 May 2025 00:48:57 +0000 (0:00:00.279) 0:04:27.432 *********** 2025-05-06 00:57:04.098816 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.098830 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.098852 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.098866 | orchestrator | 2025-05-06 00:57:04.098878 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been 
created] *** 2025-05-06 00:57:04.098889 | orchestrator | Tuesday 06 May 2025 00:48:57 +0000 (0:00:00.414) 0:04:27.846 *********** 2025-05-06 00:57:04.098904 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.098918 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.098929 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.098940 | orchestrator | 2025-05-06 00:57:04.098954 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-06 00:57:04.098969 | orchestrator | Tuesday 06 May 2025 00:48:57 +0000 (0:00:00.270) 0:04:28.117 *********** 2025-05-06 00:57:04.098980 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.098989 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.099001 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.099013 | orchestrator | 2025-05-06 00:57:04.099025 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-06 00:57:04.099039 | orchestrator | Tuesday 06 May 2025 00:48:58 +0000 (0:00:00.281) 0:04:28.399 *********** 2025-05-06 00:57:04.099158 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-06 00:57:04.099174 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-06 00:57:04.099183 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.099191 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-06 00:57:04.099202 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-06 00:57:04.099218 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.099231 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-06 00:57:04.099245 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-06 00:57:04.099258 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.099322 | orchestrator | 2025-05-06 00:57:04.099335 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] 
***************** 2025-05-06 00:57:04.099345 | orchestrator | Tuesday 06 May 2025 00:48:58 +0000 (0:00:00.310) 0:04:28.709 *********** 2025-05-06 00:57:04.099356 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-06 00:57:04.099369 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-06 00:57:04.099381 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.099393 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-06 00:57:04.099404 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-06 00:57:04.099416 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.099442 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-06 00:57:04.099455 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-06 00:57:04.099467 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.099476 | orchestrator | 2025-05-06 00:57:04.099485 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-06 00:57:04.099493 | orchestrator | Tuesday 06 May 2025 00:48:58 +0000 (0:00:00.466) 0:04:29.176 *********** 2025-05-06 00:57:04.099501 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.099512 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.099521 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.099531 | orchestrator | 2025-05-06 00:57:04.099541 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-06 00:57:04.099553 | orchestrator | Tuesday 06 May 2025 00:48:59 +0000 (0:00:00.302) 0:04:29.479 *********** 2025-05-06 00:57:04.099563 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.099574 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.099585 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.099598 | orchestrator | 
2025-05-06 00:57:04.099609 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-06 00:57:04.099620 | orchestrator | Tuesday 06 May 2025 00:48:59 +0000 (0:00:00.300) 0:04:29.780 *********** 2025-05-06 00:57:04.099631 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.099642 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.099672 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.099683 | orchestrator | 2025-05-06 00:57:04.099695 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-06 00:57:04.099706 | orchestrator | Tuesday 06 May 2025 00:48:59 +0000 (0:00:00.382) 0:04:30.162 *********** 2025-05-06 00:57:04.099718 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.099729 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.099738 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.099748 | orchestrator | 2025-05-06 00:57:04.099759 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-06 00:57:04.099773 | orchestrator | Tuesday 06 May 2025 00:49:00 +0000 (0:00:00.683) 0:04:30.846 *********** 2025-05-06 00:57:04.099785 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.099803 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.099818 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.099834 | orchestrator | 2025-05-06 00:57:04.099857 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-06 00:57:04.099877 | orchestrator | Tuesday 06 May 2025 00:49:01 +0000 (0:00:00.384) 0:04:31.231 *********** 2025-05-06 00:57:04.099890 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.099910 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.099925 | orchestrator | skipping: 
[testbed-node-2] 2025-05-06 00:57:04.099940 | orchestrator | 2025-05-06 00:57:04.099958 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-06 00:57:04.099969 | orchestrator | Tuesday 06 May 2025 00:49:01 +0000 (0:00:00.392) 0:04:31.623 *********** 2025-05-06 00:57:04.099988 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.100005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.100018 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.100036 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.100051 | orchestrator | 2025-05-06 00:57:04.100058 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-06 00:57:04.100066 | orchestrator | Tuesday 06 May 2025 00:49:01 +0000 (0:00:00.516) 0:04:32.140 *********** 2025-05-06 00:57:04.100111 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.100128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.100152 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.100170 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.100187 | orchestrator | 2025-05-06 00:57:04.100198 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-06 00:57:04.100214 | orchestrator | Tuesday 06 May 2025 00:49:02 +0000 (0:00:00.724) 0:04:32.864 *********** 2025-05-06 00:57:04.100229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.100242 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.100255 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.100271 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.100282 | 
orchestrator | 2025-05-06 00:57:04.100296 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-06 00:57:04.100417 | orchestrator | Tuesday 06 May 2025 00:49:03 +0000 (0:00:00.830) 0:04:33.695 *********** 2025-05-06 00:57:04.100433 | orchestrator | 2025-05-06 00:57:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:57:04.100445 | orchestrator | 2025-05-06 00:57:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:57:04.100459 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.100473 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.100486 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.100499 | orchestrator | 2025-05-06 00:57:04.100512 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-06 00:57:04.100520 | orchestrator | Tuesday 06 May 2025 00:49:03 +0000 (0:00:00.398) 0:04:34.093 *********** 2025-05-06 00:57:04.100527 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-06 00:57:04.100535 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.100543 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-06 00:57:04.100602 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.100615 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-06 00:57:04.100628 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.100640 | orchestrator | 2025-05-06 00:57:04.100670 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-06 00:57:04.100685 | orchestrator | Tuesday 06 May 2025 00:49:04 +0000 (0:00:00.473) 0:04:34.566 *********** 2025-05-06 00:57:04.100699 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.100713 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.100727 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.100741 | 
orchestrator | 2025-05-06 00:57:04.100754 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-06 00:57:04.100767 | orchestrator | Tuesday 06 May 2025 00:49:04 +0000 (0:00:00.271) 0:04:34.838 *********** 2025-05-06 00:57:04.100779 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.100790 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.100803 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.100815 | orchestrator | 2025-05-06 00:57:04.100826 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-06 00:57:04.100839 | orchestrator | Tuesday 06 May 2025 00:49:05 +0000 (0:00:00.438) 0:04:35.276 *********** 2025-05-06 00:57:04.100852 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-06 00:57:04.100863 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.100875 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-06 00:57:04.100889 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.100901 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-06 00:57:04.100914 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.100926 | orchestrator | 2025-05-06 00:57:04.100939 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-06 00:57:04.100950 | orchestrator | Tuesday 06 May 2025 00:49:05 +0000 (0:00:00.506) 0:04:35.783 *********** 2025-05-06 00:57:04.100964 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.100977 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.101011 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.101024 | orchestrator | 2025-05-06 00:57:04.101035 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-06 00:57:04.101048 | orchestrator | Tuesday 06 May 2025 00:49:05 +0000 (0:00:00.318) 0:04:36.101 
*********** 2025-05-06 00:57:04.101061 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.101072 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.101084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.101097 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-06 00:57:04.101106 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-06 00:57:04.101116 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-06 00:57:04.101124 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.101132 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.101141 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-06 00:57:04.101155 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-06 00:57:04.101166 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-06 00:57:04.101176 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.101188 | orchestrator | 2025-05-06 00:57:04.101199 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-06 00:57:04.101210 | orchestrator | Tuesday 06 May 2025 00:49:06 +0000 (0:00:00.757) 0:04:36.859 *********** 2025-05-06 00:57:04.101222 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.101232 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.101243 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.101253 | orchestrator | 2025-05-06 00:57:04.101265 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-06 00:57:04.101295 | orchestrator | Tuesday 06 May 2025 00:49:07 +0000 (0:00:00.498) 0:04:37.358 *********** 2025-05-06 00:57:04.101305 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.101316 | orchestrator | 
skipping: [testbed-node-1] 2025-05-06 00:57:04.101327 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.101338 | orchestrator | 2025-05-06 00:57:04.101348 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-06 00:57:04.101359 | orchestrator | Tuesday 06 May 2025 00:49:07 +0000 (0:00:00.653) 0:04:38.011 *********** 2025-05-06 00:57:04.101369 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.101381 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.101406 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.101425 | orchestrator | 2025-05-06 00:57:04.101435 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-06 00:57:04.101446 | orchestrator | Tuesday 06 May 2025 00:49:08 +0000 (0:00:00.562) 0:04:38.573 *********** 2025-05-06 00:57:04.101465 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.101478 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.101498 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.101515 | orchestrator | 2025-05-06 00:57:04.101528 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-06 00:57:04.101589 | orchestrator | Tuesday 06 May 2025 00:49:08 +0000 (0:00:00.593) 0:04:39.167 *********** 2025-05-06 00:57:04.101608 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.101620 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.101636 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.101665 | orchestrator | 2025-05-06 00:57:04.101673 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-06 00:57:04.101681 | orchestrator | Tuesday 06 May 2025 00:49:09 +0000 (0:00:00.281) 0:04:39.449 *********** 2025-05-06 00:57:04.101688 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-05-06 00:57:04.101698 | orchestrator | 2025-05-06 00:57:04.101710 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-06 00:57:04.101738 | orchestrator | Tuesday 06 May 2025 00:49:09 +0000 (0:00:00.478) 0:04:39.927 *********** 2025-05-06 00:57:04.101754 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.101770 | orchestrator | 2025-05-06 00:57:04.101782 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-06 00:57:04.101797 | orchestrator | Tuesday 06 May 2025 00:49:09 +0000 (0:00:00.113) 0:04:40.041 *********** 2025-05-06 00:57:04.101814 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-06 00:57:04.101828 | orchestrator | 2025-05-06 00:57:04.101841 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-06 00:57:04.101856 | orchestrator | Tuesday 06 May 2025 00:49:10 +0000 (0:00:00.883) 0:04:40.924 *********** 2025-05-06 00:57:04.101870 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.101883 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.101896 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.101904 | orchestrator | 2025-05-06 00:57:04.101915 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-06 00:57:04.101927 | orchestrator | Tuesday 06 May 2025 00:49:11 +0000 (0:00:00.333) 0:04:41.258 *********** 2025-05-06 00:57:04.101941 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.101956 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.101969 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.101980 | orchestrator | 2025-05-06 00:57:04.102003 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-06 00:57:04.102045 | orchestrator | Tuesday 06 May 2025 00:49:11 +0000 (0:00:00.309) 0:04:41.567 
*********** 2025-05-06 00:57:04.102060 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.102073 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.102088 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.102101 | orchestrator | 2025-05-06 00:57:04.102110 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-06 00:57:04.102119 | orchestrator | Tuesday 06 May 2025 00:49:12 +0000 (0:00:01.147) 0:04:42.715 *********** 2025-05-06 00:57:04.102129 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.102144 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.102179 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.102193 | orchestrator | 2025-05-06 00:57:04.102208 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-06 00:57:04.102232 | orchestrator | Tuesday 06 May 2025 00:49:13 +0000 (0:00:00.835) 0:04:43.550 *********** 2025-05-06 00:57:04.102245 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.102257 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.102268 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.102281 | orchestrator | 2025-05-06 00:57:04.102293 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-06 00:57:04.102301 | orchestrator | Tuesday 06 May 2025 00:49:13 +0000 (0:00:00.622) 0:04:44.173 *********** 2025-05-06 00:57:04.102314 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.102327 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.102340 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.102354 | orchestrator | 2025-05-06 00:57:04.102367 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-06 00:57:04.102377 | orchestrator | Tuesday 06 May 2025 00:49:14 +0000 (0:00:00.626) 0:04:44.800 *********** 2025-05-06 
00:57:04.102389 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.102401 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.102413 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.102426 | orchestrator |
2025-05-06 00:57:04.102441 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] *********************
2025-05-06 00:57:04.102453 | orchestrator | Tuesday 06 May 2025 00:49:14 +0000 (0:00:00.398) 0:04:45.198 ***********
2025-05-06 00:57:04.102465 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.102476 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.102489 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.102514 | orchestrator |
2025-05-06 00:57:04.102526 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************
2025-05-06 00:57:04.102539 | orchestrator | Tuesday 06 May 2025 00:49:15 +0000 (0:00:00.288) 0:04:45.487 ***********
2025-05-06 00:57:04.102551 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.102564 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.102576 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.102587 | orchestrator |
2025-05-06 00:57:04.102599 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] **************************
2025-05-06 00:57:04.102611 | orchestrator | Tuesday 06 May 2025 00:49:15 +0000 (0:00:00.278) 0:04:45.765 ***********
2025-05-06 00:57:04.102624 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.102635 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.102696 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.102714 | orchestrator |
2025-05-06 00:57:04.102723 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************
2025-05-06 00:57:04.102732 | orchestrator | Tuesday 06 May 2025 00:49:15 +0000 (0:00:00.299) 0:04:46.064 ***********
2025-05-06 00:57:04.102741 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.102749 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.102767 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.102777 | orchestrator |
2025-05-06 00:57:04.102787 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************
2025-05-06 00:57:04.102797 | orchestrator | Tuesday 06 May 2025 00:49:17 +0000 (0:00:01.503) 0:04:47.567 ***********
2025-05-06 00:57:04.102807 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.102817 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.102828 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.102838 | orchestrator |
2025-05-06 00:57:04.102896 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************
2025-05-06 00:57:04.102908 | orchestrator | Tuesday 06 May 2025 00:49:17 +0000 (0:00:00.349) 0:04:47.917 ***********
2025-05-06 00:57:04.102920 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:57:04.102931 | orchestrator |
2025-05-06 00:57:04.102942 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] *************
2025-05-06 00:57:04.102953 | orchestrator | Tuesday 06 May 2025 00:49:18 +0000 (0:00:00.551) 0:04:48.468 ***********
2025-05-06 00:57:04.102964 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.102974 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.102985 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.102995 | orchestrator |
2025-05-06 00:57:04.103007 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************
2025-05-06 00:57:04.103017 | orchestrator | Tuesday 06 May 2025 00:49:18 +0000 (0:00:00.562) 0:04:49.031 ***********
2025-05-06 00:57:04.103028 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.103040 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.103048 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.103058 | orchestrator |
2025-05-06 00:57:04.103071 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************
2025-05-06 00:57:04.103087 | orchestrator | Tuesday 06 May 2025 00:49:19 +0000 (0:00:00.329) 0:04:49.361 ***********
2025-05-06 00:57:04.103101 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:57:04.103121 | orchestrator |
2025-05-06 00:57:04.103136 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] *****************
2025-05-06 00:57:04.103152 | orchestrator | Tuesday 06 May 2025 00:49:19 +0000 (0:00:00.578) 0:04:49.939 ***********
2025-05-06 00:57:04.103170 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.103183 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.103203 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.103220 | orchestrator |
2025-05-06 00:57:04.103233 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************
2025-05-06 00:57:04.103259 | orchestrator | Tuesday 06 May 2025 00:49:21 +0000 (0:00:01.526) 0:04:51.465 ***********
2025-05-06 00:57:04.103277 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.103288 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.103305 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.103321 | orchestrator |
2025-05-06 00:57:04.103340 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] ***************************************
2025-05-06 00:57:04.103354 | orchestrator | Tuesday 06 May 2025 00:49:22 +0000 (0:00:01.088) 0:04:52.554 ***********
2025-05-06 00:57:04.103371 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.103385 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.103398 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.103414 | orchestrator |
2025-05-06 00:57:04.103428 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************
2025-05-06 00:57:04.103441 | orchestrator | Tuesday 06 May 2025 00:49:24 +0000 (0:00:01.734) 0:04:54.288 ***********
2025-05-06 00:57:04.103457 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.103471 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.103483 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.103500 | orchestrator |
2025-05-06 00:57:04.103509 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] **********************************
2025-05-06 00:57:04.103524 | orchestrator | Tuesday 06 May 2025 00:49:25 +0000 (0:00:01.895) 0:04:56.183 ***********
2025-05-06 00:57:04.103539 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:57:04.103550 | orchestrator |
2025-05-06 00:57:04.103563 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] *************
2025-05-06 00:57:04.103579 | orchestrator | Tuesday 06 May 2025 00:49:26 +0000 (0:00:00.595) 0:04:56.779 ***********
2025-05-06 00:57:04.103591 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left).
2025-05-06 00:57:04.103603 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.103616 | orchestrator |
2025-05-06 00:57:04.103631 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] **************************************
2025-05-06 00:57:04.103643 | orchestrator | Tuesday 06 May 2025 00:49:48 +0000 (0:00:21.486) 0:05:18.265 ***********
2025-05-06 00:57:04.103674 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.103689 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.103703 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.103715 | orchestrator |
2025-05-06 00:57:04.103726 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] ***********************************
2025-05-06 00:57:04.103741 | orchestrator | Tuesday 06 May 2025 00:49:55 +0000 (0:00:07.955) 0:05:26.220 ***********
2025-05-06 00:57:04.103755 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.103767 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.103775 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.103787 | orchestrator |
2025-05-06 00:57:04.103799 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-06 00:57:04.103812 | orchestrator | Tuesday 06 May 2025 00:49:57 +0000 (0:00:01.124) 0:05:27.345 ***********
2025-05-06 00:57:04.103824 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.103837 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.103848 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.103862 | orchestrator |
2025-05-06 00:57:04.103873 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] **********************************
2025-05-06 00:57:04.103885 | orchestrator | Tuesday 06 May 2025 00:49:57 +0000 (0:00:00.648) 0:05:27.994 ***********
2025-05-06 00:57:04.103906 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:57:04.103918 | orchestrator |
2025-05-06 00:57:04.103930 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ********
2025-05-06 00:57:04.103943 | orchestrator | Tuesday 06 May 2025 00:49:58 +0000 (0:00:00.771) 0:05:28.766 ***********
2025-05-06 00:57:04.103954 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.104016 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.104031 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.104043 | orchestrator |
2025-05-06 00:57:04.104054 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] ***********************
2025-05-06 00:57:04.104067 | orchestrator | Tuesday 06 May 2025 00:49:58 +0000 (0:00:00.338) 0:05:29.104 ***********
2025-05-06 00:57:04.104079 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.104091 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.104103 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.104115 | orchestrator |
2025-05-06 00:57:04.104127 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ********************
2025-05-06 00:57:04.104139 | orchestrator | Tuesday 06 May 2025 00:50:00 +0000 (0:00:01.238) 0:05:30.342 ***********
2025-05-06 00:57:04.104151 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:04.104164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:57:04.104176 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:57:04.104189 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.104202 | orchestrator |
2025-05-06 00:57:04.104214 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] *********
2025-05-06 00:57:04.104226 | orchestrator | Tuesday 06 May 2025 00:50:01 +0000 (0:00:00.966) 0:05:31.309 ***********
2025-05-06 00:57:04.104239 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.104251 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.104259 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.104268 | orchestrator |
2025-05-06 00:57:04.104276 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-06 00:57:04.104285 | orchestrator | Tuesday 06 May 2025 00:50:01 +0000 (0:00:00.349) 0:05:31.658 ***********
2025-05-06 00:57:04.104294 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.104303 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.104312 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.104322 | orchestrator |
2025-05-06 00:57:04.104332 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-05-06 00:57:04.104341 | orchestrator |
2025-05-06 00:57:04.104352 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-06 00:57:04.104363 | orchestrator | Tuesday 06 May 2025 00:50:03 +0000 (0:00:02.096) 0:05:33.755 ***********
2025-05-06 00:57:04.104374 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:57:04.104386 | orchestrator |
2025-05-06 00:57:04.104397 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-06 00:57:04.104407 | orchestrator | Tuesday 06 May 2025 00:50:04 +0000 (0:00:00.766) 0:05:34.521 ***********
2025-05-06 00:57:04.104417 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.104429 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.104440 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.104450 | orchestrator |
2025-05-06 00:57:04.104461 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-06 00:57:04.104480 | orchestrator | Tuesday 06 May 2025 00:50:05 +0000 (0:00:00.756) 0:05:35.277 ***********
2025-05-06 00:57:04.104492 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.104502 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.104517 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.104527 | orchestrator |
2025-05-06 00:57:04.104543 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-06 00:57:04.104554 | orchestrator | Tuesday 06 May 2025 00:50:05 +0000 (0:00:00.379) 0:05:35.657 ***********
2025-05-06 00:57:04.104573 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.104589 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.104603 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.104622 | orchestrator |
2025-05-06 00:57:04.104637 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-06 00:57:04.104670 | orchestrator | Tuesday 06 May 2025 00:50:06 +0000 (0:00:00.578) 0:05:36.235 ***********
2025-05-06 00:57:04.104685 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.104693 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.104702 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.104713 | orchestrator |
2025-05-06 00:57:04.104731 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-06 00:57:04.104744 | orchestrator | Tuesday 06 May 2025 00:50:06 +0000 (0:00:00.330) 0:05:36.566 ***********
2025-05-06 00:57:04.104763 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.104778 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.104790 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.104809 | orchestrator |
2025-05-06 00:57:04.104823 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-06 00:57:04.104836 | orchestrator | Tuesday 06 May 2025 00:50:07 +0000 (0:00:00.732) 0:05:37.299 ***********
2025-05-06 00:57:04.104853 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.104867 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.104880 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.104895 | orchestrator |
2025-05-06 00:57:04.104907 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-06 00:57:04.104922 | orchestrator | Tuesday 06 May 2025 00:50:07 +0000 (0:00:00.333) 0:05:37.632 ***********
2025-05-06 00:57:04.104938 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.104950 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.104964 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.104980 | orchestrator |
2025-05-06 00:57:04.104993 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-06 00:57:04.105004 | orchestrator | Tuesday 06 May 2025 00:50:08 +0000 (0:00:00.614) 0:05:38.246 ***********
2025-05-06 00:57:04.105019 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.105028 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.105042 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.105056 | orchestrator |
2025-05-06 00:57:04.105067 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-06 00:57:04.105079 | orchestrator | Tuesday 06 May 2025 00:50:08 +0000 (0:00:00.422) 0:05:38.669 ***********
2025-05-06 00:57:04.105094 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.105107 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.105117 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.105131 | orchestrator |
2025-05-06 00:57:04.105186 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-06 00:57:04.105202 | orchestrator | Tuesday 06 May 2025 00:50:08 +0000 (0:00:00.340) 0:05:39.009 ***********
2025-05-06 00:57:04.105213 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.105225 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.105239 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.105252 | orchestrator |
2025-05-06 00:57:04.105262 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-06 00:57:04.105271 | orchestrator | Tuesday 06 May 2025 00:50:09 +0000 (0:00:00.318) 0:05:39.328 ***********
2025-05-06 00:57:04.105282 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.105294 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.105309 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.105321 | orchestrator |
2025-05-06 00:57:04.105332 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-06 00:57:04.105343 | orchestrator | Tuesday 06 May 2025 00:50:10 +0000 (0:00:00.983) 0:05:40.311 ***********
2025-05-06 00:57:04.105354 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.105366 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.105377 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.105388 | orchestrator |
2025-05-06 00:57:04.105399 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-06 00:57:04.105410 | orchestrator | Tuesday 06 May 2025 00:50:10 +0000 (0:00:00.509) 0:05:40.820 ***********
2025-05-06 00:57:04.105432 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.105443 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.105455 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.105467 | orchestrator |
2025-05-06 00:57:04.105479 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-06 00:57:04.105491 | orchestrator | Tuesday 06 May 2025 00:50:10 +0000 (0:00:00.338) 0:05:41.159 ***********
2025-05-06 00:57:04.105503 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.105514 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.105525 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.105535 | orchestrator |
2025-05-06 00:57:04.105547 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-06 00:57:04.105558 | orchestrator | Tuesday 06 May 2025 00:50:11 +0000 (0:00:00.316) 0:05:41.476 ***********
2025-05-06 00:57:04.105569 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.105581 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.105592 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.105603 | orchestrator |
2025-05-06 00:57:04.105615 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-06 00:57:04.105624 | orchestrator | Tuesday 06 May 2025 00:50:11 +0000 (0:00:00.618) 0:05:42.094 ***********
2025-05-06 00:57:04.105636 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.105701 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.105716 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.105725 | orchestrator |
2025-05-06 00:57:04.105733 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-06 00:57:04.105742 | orchestrator | Tuesday 06 May 2025 00:50:12 +0000 (0:00:00.367) 0:05:42.461 ***********
2025-05-06 00:57:04.105750 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.105759 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.105768 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.105778 | orchestrator |
2025-05-06 00:57:04.105795 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-06 00:57:04.105805 | orchestrator | Tuesday 06 May 2025 00:50:12 +0000 (0:00:00.310) 0:05:42.772 ***********
2025-05-06 00:57:04.105815 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.105826 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.105836 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.105847 | orchestrator |
2025-05-06 00:57:04.105858 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-06 00:57:04.105868 | orchestrator | Tuesday 06 May 2025 00:50:12 +0000 (0:00:00.298) 0:05:43.071 ***********
2025-05-06 00:57:04.105878 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.105889 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.105906 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.105916 | orchestrator |
2025-05-06 00:57:04.105926 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-06 00:57:04.105936 | orchestrator | Tuesday 06 May 2025 00:50:13 +0000 (0:00:00.576) 0:05:43.648 ***********
2025-05-06 00:57:04.105945 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.105957 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.105968 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.105979 | orchestrator |
2025-05-06 00:57:04.105990 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-06 00:57:04.106000 | orchestrator | Tuesday 06 May 2025 00:50:13 +0000 (0:00:00.339) 0:05:43.987 ***********
2025-05-06 00:57:04.106011 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106059 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106072 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106091 | orchestrator |
2025-05-06 00:57:04.106104 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-06 00:57:04.106112 | orchestrator | Tuesday 06 May 2025 00:50:14 +0000 (0:00:00.327) 0:05:44.314 ***********
2025-05-06 00:57:04.106121 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106144 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106163 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106179 | orchestrator |
2025-05-06 00:57:04.106195 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-06 00:57:04.106215 | orchestrator | Tuesday 06 May 2025 00:50:14 +0000 (0:00:00.559) 0:05:44.874 ***********
2025-05-06 00:57:04.106232 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106247 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106264 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106283 | orchestrator |
2025-05-06 00:57:04.106299 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-06 00:57:04.106314 | orchestrator | Tuesday 06 May 2025 00:50:14 +0000 (0:00:00.321) 0:05:45.195 ***********
2025-05-06 00:57:04.106331 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106349 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106365 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106380 | orchestrator |
2025-05-06 00:57:04.106439 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-06 00:57:04.106457 | orchestrator | Tuesday 06 May 2025 00:50:15 +0000 (0:00:00.368) 0:05:45.563 ***********
2025-05-06 00:57:04.106469 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106482 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106494 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106505 | orchestrator |
2025-05-06 00:57:04.106521 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-06 00:57:04.106535 | orchestrator | Tuesday 06 May 2025 00:50:15 +0000 (0:00:00.334) 0:05:45.898 ***********
2025-05-06 00:57:04.106547 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106559 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106572 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106582 | orchestrator |
2025-05-06 00:57:04.106595 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-06 00:57:04.106610 | orchestrator | Tuesday 06 May 2025 00:50:16 +0000 (0:00:00.526) 0:05:46.424 ***********
2025-05-06 00:57:04.106624 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106635 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106668 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106677 | orchestrator |
2025-05-06 00:57:04.106684 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-06 00:57:04.106694 | orchestrator | Tuesday 06 May 2025 00:50:16 +0000 (0:00:00.343) 0:05:46.768 ***********
2025-05-06 00:57:04.106708 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106719 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106732 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106747 | orchestrator |
2025-05-06 00:57:04.106760 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-06 00:57:04.106771 | orchestrator | Tuesday 06 May 2025 00:50:16 +0000 (0:00:00.336) 0:05:47.104 ***********
2025-05-06 00:57:04.106780 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106791 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106804 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106816 | orchestrator |
2025-05-06 00:57:04.106828 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-06 00:57:04.106841 | orchestrator | Tuesday 06 May 2025 00:50:17 +0000 (0:00:00.365) 0:05:47.470 ***********
2025-05-06 00:57:04.106853 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106865 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106878 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106890 | orchestrator |
2025-05-06 00:57:04.106901 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-06 00:57:04.106927 | orchestrator | Tuesday 06 May 2025 00:50:17 +0000 (0:00:00.641) 0:05:48.112 ***********
2025-05-06 00:57:04.106939 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.106963 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.106974 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.106987 | orchestrator |
2025-05-06 00:57:04.106999 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-06 00:57:04.107012 | orchestrator | Tuesday 06 May 2025 00:50:18 +0000 (0:00:00.366) 0:05:48.478 ***********
2025-05-06 00:57:04.107023 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.107036 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.107049 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.107060 | orchestrator |
2025-05-06 00:57:04.107073 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-06 00:57:04.107087 | orchestrator | Tuesday 06 May 2025 00:50:18 +0000 (0:00:00.370) 0:05:48.849 ***********
2025-05-06 00:57:04.107099 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-06 00:57:04.107111 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-06 00:57:04.107123 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.107136 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-06 00:57:04.107148 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-06 00:57:04.107161 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.107174 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-06 00:57:04.107185 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-06 00:57:04.107198 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.107209 | orchestrator |
2025-05-06 00:57:04.107222 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-06 00:57:04.107234 | orchestrator | Tuesday 06 May 2025 00:50:18 +0000 (0:00:00.363) 0:05:49.212 ***********
2025-05-06 00:57:04.107245 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-06 00:57:04.107258 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-06 00:57:04.107267 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.107275 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-06 00:57:04.107283 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-06 00:57:04.107292 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.107301 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-06 00:57:04.107309 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-06 00:57:04.107319 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.107328 | orchestrator |
2025-05-06 00:57:04.107339 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-06 00:57:04.107349 | orchestrator | Tuesday 06 May 2025 00:50:19 +0000 (0:00:00.653) 0:05:49.866 ***********
2025-05-06 00:57:04.107359 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.107370 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.107380 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.107391 | orchestrator |
2025-05-06 00:57:04.107409 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-06 00:57:04.107419 | orchestrator | Tuesday 06 May 2025 00:50:19 +0000 (0:00:00.254) 0:05:50.121 ***********
2025-05-06 00:57:04.107431 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.107441 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.107496 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.107507 | orchestrator |
2025-05-06 00:57:04.107517 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-06 00:57:04.107528 | orchestrator | Tuesday 06 May 2025 00:50:20 +0000 (0:00:00.286) 0:05:50.407 ***********
2025-05-06 00:57:04.107542 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.107552 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.107569 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.107582 | orchestrator |
2025-05-06 00:57:04.107601 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-06 00:57:04.107627 | orchestrator | Tuesday 06 May 2025 00:50:20 +0000 (0:00:00.260) 0:05:50.668 ***********
2025-05-06 00:57:04.107639 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.107706 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.107726 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.107741 | orchestrator |
2025-05-06 00:57:04.107754 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-06 00:57:04.107771 | orchestrator | Tuesday 06 May 2025 00:50:20 +0000 (0:00:00.439) 0:05:51.107 ***********
2025-05-06 00:57:04.107785 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.107799 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.107816 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.107830 | orchestrator |
2025-05-06 00:57:04.107843 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-06 00:57:04.107861 | orchestrator | Tuesday 06 May 2025 00:50:21 +0000 (0:00:00.249) 0:05:51.357 ***********
2025-05-06 00:57:04.107875 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.107887 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.107904 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.107920 | orchestrator |
2025-05-06 00:57:04.107931 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-06 00:57:04.107946 | orchestrator | Tuesday 06 May 2025 00:50:21 +0000 (0:00:00.320) 0:05:51.677 ***********
2025-05-06 00:57:04.107961 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-06 00:57:04.107971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-06 00:57:04.107985 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-06 00:57:04.107994 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108008 | orchestrator |
2025-05-06 00:57:04.108022 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-06 00:57:04.108037 | orchestrator | Tuesday 06 May 2025 00:50:21 +0000 (0:00:00.371) 0:05:52.048 ***********
2025-05-06 00:57:04.108047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-06 00:57:04.108062 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-06 00:57:04.108077 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-06 00:57:04.108090 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108102 | orchestrator |
2025-05-06 00:57:04.108116 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-06 00:57:04.108129 | orchestrator | Tuesday 06 May 2025 00:50:22 +0000 (0:00:00.380) 0:05:52.428 ***********
2025-05-06 00:57:04.108143 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-06 00:57:04.108154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-06 00:57:04.108169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-06 00:57:04.108184 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108194 | orchestrator |
2025-05-06 00:57:04.108207 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-06 00:57:04.108221 | orchestrator | Tuesday 06 May 2025 00:50:22 +0000 (0:00:00.476) 0:05:52.905 ***********
2025-05-06 00:57:04.108233 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108241 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.108254 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.108267 | orchestrator |
2025-05-06 00:57:04.108280 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-06 00:57:04.108293 | orchestrator | Tuesday 06 May 2025 00:50:23 +0000 (0:00:00.420) 0:05:53.326 ***********
2025-05-06 00:57:04.108313 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-06 00:57:04.108325 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108337 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-06 00:57:04.108348 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.108361 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-06 00:57:04.108384 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.108397 | orchestrator |
2025-05-06 00:57:04.108410 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-06 00:57:04.108421 | orchestrator | Tuesday 06 May 2025 00:50:23 +0000 (0:00:00.377) 0:05:53.703 ***********
2025-05-06 00:57:04.108432 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108444 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.108456 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.108467 | orchestrator |
2025-05-06 00:57:04.108479 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-06 00:57:04.108491 | orchestrator | Tuesday 06 May 2025 00:50:23 +0000 (0:00:00.312) 0:05:54.016 ***********
2025-05-06 00:57:04.108503 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108515 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.108528 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.108540 | orchestrator |
2025-05-06 00:57:04.108551 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-06 00:57:04.108564 | orchestrator | Tuesday 06 May 2025 00:50:24 +0000 (0:00:00.275) 0:05:54.291 ***********
2025-05-06 00:57:04.108576 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-06 00:57:04.108588 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108601 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-06 00:57:04.108612 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.108625 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-06 00:57:04.108637 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.108668 | orchestrator |
2025-05-06 00:57:04.108728 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-06 00:57:04.108738 | orchestrator | Tuesday 06 May 2025 00:50:24 +0000 (0:00:00.568) 0:05:54.859 ***********
2025-05-06 00:57:04.108747 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108755 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.108763 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.108772 | orchestrator |
2025-05-06 00:57:04.108781 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-06 00:57:04.108791 | orchestrator | Tuesday 06 May 2025 00:50:24 +0000 (0:00:00.308) 0:05:55.168 ***********
2025-05-06 00:57:04.108802 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-06 00:57:04.108811 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-06 00:57:04.108821 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-06 00:57:04.108832 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-06 00:57:04.108843 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-06 00:57:04.108854 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-06 00:57:04.108864 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108876 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.108887 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-06 00:57:04.108898 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-06 00:57:04.108909 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-06 00:57:04.108921 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.108932 | orchestrator |
2025-05-06 00:57:04.108943 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-06 00:57:04.108953 | orchestrator | Tuesday 06 May 2025 00:50:25 +0000 (0:00:00.522) 0:05:55.691 ***********
2025-05-06 00:57:04.108963 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:04.108974 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:04.108985 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:04.108995 | orchestrator |
2025-05-06 00:57:04.109006 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-06 00:57:04.109025 | orchestrator | Tuesday 06 May 2025 00:50:26
+0000 (0:00:00.626) 0:05:56.318 *********** 2025-05-06 00:57:04.109046 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.109059 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.109077 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.109090 | orchestrator | 2025-05-06 00:57:04.109109 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-06 00:57:04.109125 | orchestrator | Tuesday 06 May 2025 00:50:26 +0000 (0:00:00.479) 0:05:56.797 *********** 2025-05-06 00:57:04.109141 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.109159 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.109172 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.109190 | orchestrator | 2025-05-06 00:57:04.109206 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-06 00:57:04.109219 | orchestrator | Tuesday 06 May 2025 00:50:27 +0000 (0:00:00.697) 0:05:57.495 *********** 2025-05-06 00:57:04.109238 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.109251 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.109266 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.109284 | orchestrator | 2025-05-06 00:57:04.109296 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-06 00:57:04.109313 | orchestrator | Tuesday 06 May 2025 00:50:27 +0000 (0:00:00.557) 0:05:58.053 *********** 2025-05-06 00:57:04.109329 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-06 00:57:04.109342 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-06 00:57:04.109360 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-06 00:57:04.109375 | orchestrator | 2025-05-06 00:57:04.109387 | orchestrator | TASK [ceph-mgr : include 
common.yml] ******************************************* 2025-05-06 00:57:04.109404 | orchestrator | Tuesday 06 May 2025 00:50:28 +0000 (0:00:01.021) 0:05:59.075 *********** 2025-05-06 00:57:04.109419 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:57:04.109432 | orchestrator | 2025-05-06 00:57:04.109449 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-06 00:57:04.109463 | orchestrator | Tuesday 06 May 2025 00:50:29 +0000 (0:00:00.535) 0:05:59.611 *********** 2025-05-06 00:57:04.109474 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.109491 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.109501 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.109522 | orchestrator | 2025-05-06 00:57:04.109537 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-06 00:57:04.109552 | orchestrator | Tuesday 06 May 2025 00:50:30 +0000 (0:00:00.665) 0:06:00.276 *********** 2025-05-06 00:57:04.109566 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.109578 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.109592 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.109606 | orchestrator | 2025-05-06 00:57:04.109617 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-06 00:57:04.109631 | orchestrator | Tuesday 06 May 2025 00:50:30 +0000 (0:00:00.572) 0:06:00.848 *********** 2025-05-06 00:57:04.109689 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-06 00:57:04.109708 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-06 00:57:04.109721 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-06 00:57:04.109733 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-06 00:57:04.109748 | 
orchestrator | 2025-05-06 00:57:04.109762 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-06 00:57:04.109773 | orchestrator | Tuesday 06 May 2025 00:50:38 +0000 (0:00:07.947) 0:06:08.795 *********** 2025-05-06 00:57:04.109785 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.109795 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.109805 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.109817 | orchestrator | 2025-05-06 00:57:04.109873 | orchestrator | TASK [ceph-mgr : get keys from monitors] *************************************** 2025-05-06 00:57:04.109898 | orchestrator | Tuesday 06 May 2025 00:50:39 +0000 (0:00:00.619) 0:06:09.414 *********** 2025-05-06 00:57:04.109910 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-06 00:57:04.109922 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-06 00:57:04.109934 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-06 00:57:04.109946 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-06 00:57:04.109959 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-06 00:57:04.109971 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-06 00:57:04.109984 | orchestrator | 2025-05-06 00:57:04.109995 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-06 00:57:04.110007 | orchestrator | Tuesday 06 May 2025 00:50:40 +0000 (0:00:01.765) 0:06:11.179 *********** 2025-05-06 00:57:04.110045 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-06 00:57:04.110059 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-06 00:57:04.110070 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-06 00:57:04.110083 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-06 00:57:04.110095 | orchestrator | changed: 
[testbed-node-1] => (item=None) 2025-05-06 00:57:04.110104 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-06 00:57:04.110112 | orchestrator | 2025-05-06 00:57:04.110120 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-06 00:57:04.110128 | orchestrator | Tuesday 06 May 2025 00:50:42 +0000 (0:00:01.256) 0:06:12.436 *********** 2025-05-06 00:57:04.110135 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.110143 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.110154 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.110169 | orchestrator | 2025-05-06 00:57:04.110183 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-06 00:57:04.110197 | orchestrator | Tuesday 06 May 2025 00:50:42 +0000 (0:00:00.675) 0:06:13.111 *********** 2025-05-06 00:57:04.110210 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.110222 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.110234 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.110246 | orchestrator | 2025-05-06 00:57:04.110258 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-06 00:57:04.110271 | orchestrator | Tuesday 06 May 2025 00:50:43 +0000 (0:00:00.537) 0:06:13.648 *********** 2025-05-06 00:57:04.110282 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.110294 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.110305 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.110317 | orchestrator | 2025-05-06 00:57:04.110329 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-05-06 00:57:04.110343 | orchestrator | Tuesday 06 May 2025 00:50:43 +0000 (0:00:00.365) 0:06:14.014 *********** 2025-05-06 00:57:04.110351 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-05-06 00:57:04.110359 | orchestrator | 2025-05-06 00:57:04.110367 | orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] ************* 2025-05-06 00:57:04.110376 | orchestrator | Tuesday 06 May 2025 00:50:44 +0000 (0:00:00.766) 0:06:14.781 *********** 2025-05-06 00:57:04.110384 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.110394 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.110403 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.110413 | orchestrator | 2025-05-06 00:57:04.110423 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-06 00:57:04.110433 | orchestrator | Tuesday 06 May 2025 00:50:44 +0000 (0:00:00.334) 0:06:15.115 *********** 2025-05-06 00:57:04.110444 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.110455 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.110466 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.110489 | orchestrator | 2025-05-06 00:57:04.110500 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-06 00:57:04.110512 | orchestrator | Tuesday 06 May 2025 00:50:45 +0000 (0:00:00.328) 0:06:15.443 *********** 2025-05-06 00:57:04.110524 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:57:04.110535 | orchestrator | 2025-05-06 00:57:04.110545 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-06 00:57:04.110556 | orchestrator | Tuesday 06 May 2025 00:50:45 +0000 (0:00:00.769) 0:06:16.212 *********** 2025-05-06 00:57:04.110566 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.110576 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.110587 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.110599 | orchestrator 
| 2025-05-06 00:57:04.110609 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-06 00:57:04.110619 | orchestrator | Tuesday 06 May 2025 00:50:47 +0000 (0:00:01.169) 0:06:17.381 *********** 2025-05-06 00:57:04.110630 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.110641 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.110668 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.110684 | orchestrator | 2025-05-06 00:57:04.110694 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-06 00:57:04.110713 | orchestrator | Tuesday 06 May 2025 00:50:48 +0000 (0:00:01.156) 0:06:18.538 *********** 2025-05-06 00:57:04.110731 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.110743 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.110762 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.110777 | orchestrator | 2025-05-06 00:57:04.110793 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-06 00:57:04.110811 | orchestrator | Tuesday 06 May 2025 00:50:50 +0000 (0:00:02.029) 0:06:20.567 *********** 2025-05-06 00:57:04.110823 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.110841 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.110857 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.110870 | orchestrator | 2025-05-06 00:57:04.110887 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-06 00:57:04.110939 | orchestrator | Tuesday 06 May 2025 00:50:52 +0000 (0:00:01.799) 0:06:22.367 *********** 2025-05-06 00:57:04.110956 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.110968 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.110983 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 
2025-05-06 00:57:04.111000 | orchestrator | 2025-05-06 00:57:04.111012 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-06 00:57:04.111027 | orchestrator | Tuesday 06 May 2025 00:50:52 +0000 (0:00:00.592) 0:06:22.959 *********** 2025-05-06 00:57:04.111042 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-06 00:57:04.111055 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-05-06 00:57:04.111069 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-06 00:57:04.111085 | orchestrator | 2025-05-06 00:57:04.111099 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-06 00:57:04.111111 | orchestrator | Tuesday 06 May 2025 00:51:06 +0000 (0:00:13.542) 0:06:36.502 *********** 2025-05-06 00:57:04.111124 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-06 00:57:04.111135 | orchestrator | 2025-05-06 00:57:04.111150 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-06 00:57:04.111164 | orchestrator | Tuesday 06 May 2025 00:51:08 +0000 (0:00:01.854) 0:06:38.357 *********** 2025-05-06 00:57:04.111175 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.111189 | orchestrator | 2025-05-06 00:57:04.111204 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-06 00:57:04.111226 | orchestrator | Tuesday 06 May 2025 00:51:08 +0000 (0:00:00.524) 0:06:38.881 *********** 2025-05-06 00:57:04.111242 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.111256 | orchestrator | 2025-05-06 00:57:04.111267 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-06 00:57:04.111281 | orchestrator | Tuesday 06 May 
2025 00:51:08 +0000 (0:00:00.293) 0:06:39.175 *********** 2025-05-06 00:57:04.111296 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-06 00:57:04.111311 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-06 00:57:04.111322 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-06 00:57:04.111334 | orchestrator | 2025-05-06 00:57:04.111353 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-05-06 00:57:04.111363 | orchestrator | Tuesday 06 May 2025 00:51:15 +0000 (0:00:06.477) 0:06:45.652 *********** 2025-05-06 00:57:04.111374 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-06 00:57:04.111386 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-06 00:57:04.111399 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-06 00:57:04.111414 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-06 00:57:04.111426 | orchestrator | 2025-05-06 00:57:04.111436 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-06 00:57:04.111449 | orchestrator | Tuesday 06 May 2025 00:51:20 +0000 (0:00:04.884) 0:06:50.537 *********** 2025-05-06 00:57:04.111461 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.111474 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.111486 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.111497 | orchestrator | 2025-05-06 00:57:04.111510 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-06 00:57:04.111521 | orchestrator | Tuesday 06 May 2025 00:51:20 +0000 (0:00:00.682) 0:06:51.220 *********** 2025-05-06 00:57:04.111534 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:57:04.111547 | orchestrator | 2025-05-06 00:57:04.111559 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-06 00:57:04.111570 | orchestrator | Tuesday 06 May 2025 00:51:21 +0000 (0:00:00.646) 0:06:51.867 *********** 2025-05-06 00:57:04.111583 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.111595 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.111607 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.111619 | orchestrator | 2025-05-06 00:57:04.111631 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-06 00:57:04.111643 | orchestrator | Tuesday 06 May 2025 00:51:21 +0000 (0:00:00.279) 0:06:52.146 *********** 2025-05-06 00:57:04.111679 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.111692 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.111706 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.111720 | orchestrator | 2025-05-06 00:57:04.111733 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-06 00:57:04.111746 | orchestrator | Tuesday 06 May 2025 00:51:23 +0000 (0:00:01.241) 0:06:53.388 *********** 2025-05-06 00:57:04.111757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-06 00:57:04.111771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-06 00:57:04.111782 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-06 00:57:04.111795 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.111808 | orchestrator | 2025-05-06 00:57:04.111820 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-06 00:57:04.111831 | orchestrator | Tuesday 06 May 2025 00:51:23 +0000 (0:00:00.597) 
0:06:53.985 *********** 2025-05-06 00:57:04.111845 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.111872 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.111886 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.111895 | orchestrator | 2025-05-06 00:57:04.111903 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-06 00:57:04.111912 | orchestrator | Tuesday 06 May 2025 00:51:24 +0000 (0:00:00.295) 0:06:54.281 *********** 2025-05-06 00:57:04.111957 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.111972 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.111983 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.111993 | orchestrator | 2025-05-06 00:57:04.112004 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-06 00:57:04.112016 | orchestrator | 2025-05-06 00:57:04.112027 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-06 00:57:04.112039 | orchestrator | Tuesday 06 May 2025 00:51:26 +0000 (0:00:01.989) 0:06:56.270 *********** 2025-05-06 00:57:04.112050 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.112062 | orchestrator | 2025-05-06 00:57:04.112072 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-06 00:57:04.112083 | orchestrator | Tuesday 06 May 2025 00:51:26 +0000 (0:00:00.784) 0:06:57.054 *********** 2025-05-06 00:57:04.112093 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.112104 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.112114 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.112124 | orchestrator | 2025-05-06 00:57:04.112135 | orchestrator | TASK [ceph-handler : check for an osd container] 
******************************* 2025-05-06 00:57:04.112145 | orchestrator | Tuesday 06 May 2025 00:51:27 +0000 (0:00:00.281) 0:06:57.336 *********** 2025-05-06 00:57:04.112159 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.112173 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.112189 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.112207 | orchestrator | 2025-05-06 00:57:04.112220 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-06 00:57:04.112239 | orchestrator | Tuesday 06 May 2025 00:51:27 +0000 (0:00:00.687) 0:06:58.024 *********** 2025-05-06 00:57:04.112255 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.112269 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.112286 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.112300 | orchestrator | 2025-05-06 00:57:04.112318 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-06 00:57:04.112333 | orchestrator | Tuesday 06 May 2025 00:51:28 +0000 (0:00:00.930) 0:06:58.954 *********** 2025-05-06 00:57:04.112346 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.112363 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.112376 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.112393 | orchestrator | 2025-05-06 00:57:04.112409 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-06 00:57:04.112421 | orchestrator | Tuesday 06 May 2025 00:51:29 +0000 (0:00:00.688) 0:06:59.643 *********** 2025-05-06 00:57:04.112437 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.112452 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.112465 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.112481 | orchestrator | 2025-05-06 00:57:04.112495 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-06 
00:57:04.112517 | orchestrator | Tuesday 06 May 2025 00:51:29 +0000 (0:00:00.301) 0:06:59.944 *********** 2025-05-06 00:57:04.112531 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.112547 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.112562 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.112572 | orchestrator | 2025-05-06 00:57:04.112586 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-06 00:57:04.112596 | orchestrator | Tuesday 06 May 2025 00:51:30 +0000 (0:00:00.603) 0:07:00.548 *********** 2025-05-06 00:57:04.112608 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.112622 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.112642 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.112674 | orchestrator | 2025-05-06 00:57:04.112688 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-06 00:57:04.112701 | orchestrator | Tuesday 06 May 2025 00:51:30 +0000 (0:00:00.343) 0:07:00.891 *********** 2025-05-06 00:57:04.112712 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.112726 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.112741 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.112753 | orchestrator | 2025-05-06 00:57:04.112764 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-06 00:57:04.112778 | orchestrator | Tuesday 06 May 2025 00:51:30 +0000 (0:00:00.294) 0:07:01.186 *********** 2025-05-06 00:57:04.112791 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.112804 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.112815 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.112828 | orchestrator | 2025-05-06 00:57:04.112838 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-06 
00:57:04.112848 | orchestrator | Tuesday 06 May 2025 00:51:31 +0000 (0:00:00.298) 0:07:01.484 *********** 2025-05-06 00:57:04.112859 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.112871 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.112884 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.112896 | orchestrator | 2025-05-06 00:57:04.112907 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-06 00:57:04.112920 | orchestrator | Tuesday 06 May 2025 00:51:31 +0000 (0:00:00.554) 0:07:02.038 *********** 2025-05-06 00:57:04.112932 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.112943 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.112955 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.112966 | orchestrator | 2025-05-06 00:57:04.112978 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-06 00:57:04.112992 | orchestrator | Tuesday 06 May 2025 00:51:32 +0000 (0:00:00.671) 0:07:02.710 *********** 2025-05-06 00:57:04.113004 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.113015 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.113027 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.113039 | orchestrator | 2025-05-06 00:57:04.113052 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-06 00:57:04.113063 | orchestrator | Tuesday 06 May 2025 00:51:32 +0000 (0:00:00.268) 0:07:02.979 *********** 2025-05-06 00:57:04.113076 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.113087 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.113100 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.113112 | orchestrator | 2025-05-06 00:57:04.113166 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-06 00:57:04.113179 | 
orchestrator | Tuesday 06 May 2025 00:51:32 +0000 (0:00:00.247) 0:07:03.226 *********** 2025-05-06 00:57:04.113191 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.113203 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.113215 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.113226 | orchestrator | 2025-05-06 00:57:04.113237 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-06 00:57:04.113248 | orchestrator | Tuesday 06 May 2025 00:51:33 +0000 (0:00:00.402) 0:07:03.628 *********** 2025-05-06 00:57:04.113260 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.113272 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.113284 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.113296 | orchestrator | 2025-05-06 00:57:04.113307 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-06 00:57:04.113318 | orchestrator | Tuesday 06 May 2025 00:51:33 +0000 (0:00:00.276) 0:07:03.905 *********** 2025-05-06 00:57:04.113327 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.113335 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.113351 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.113368 | orchestrator | 2025-05-06 00:57:04.113379 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-06 00:57:04.113389 | orchestrator | Tuesday 06 May 2025 00:51:33 +0000 (0:00:00.263) 0:07:04.168 *********** 2025-05-06 00:57:04.113399 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.113409 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.113419 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.113430 | orchestrator | 2025-05-06 00:57:04.113441 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-06 00:57:04.113452 | orchestrator | Tuesday 06 May 2025 00:51:34 +0000 
(0:00:00.253) 0:07:04.422 *********** 2025-05-06 00:57:04.113462 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.113473 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.113483 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.113495 | orchestrator | 2025-05-06 00:57:04.113505 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-06 00:57:04.113516 | orchestrator | Tuesday 06 May 2025 00:51:34 +0000 (0:00:00.423) 0:07:04.845 *********** 2025-05-06 00:57:04.113527 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.113538 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.113548 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.113558 | orchestrator | 2025-05-06 00:57:04.113569 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-06 00:57:04.113579 | orchestrator | Tuesday 06 May 2025 00:51:34 +0000 (0:00:00.261) 0:07:05.107 *********** 2025-05-06 00:57:04.113590 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.113603 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.113620 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.113632 | orchestrator | 2025-05-06 00:57:04.113694 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-06 00:57:04.113712 | orchestrator | Tuesday 06 May 2025 00:51:35 +0000 (0:00:00.274) 0:07:05.381 *********** 2025-05-06 00:57:04.113725 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.113743 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.113756 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.113772 | orchestrator | 2025-05-06 00:57:04.113789 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-06 00:57:04.113808 | orchestrator | Tuesday 06 May 2025 00:51:35 +0000 (0:00:00.290) 
0:07:05.671 *********** 2025-05-06 00:57:04.113820 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.113836 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.113849 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.113861 | orchestrator | 2025-05-06 00:57:04.113877 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-06 00:57:04.113890 | orchestrator | Tuesday 06 May 2025 00:51:35 +0000 (0:00:00.450) 0:07:06.122 *********** 2025-05-06 00:57:04.113902 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.113918 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.113933 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.113945 | orchestrator | 2025-05-06 00:57:04.113960 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-06 00:57:04.113975 | orchestrator | Tuesday 06 May 2025 00:51:36 +0000 (0:00:00.271) 0:07:06.393 *********** 2025-05-06 00:57:04.113987 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.114000 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.114051 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.114068 | orchestrator | 2025-05-06 00:57:04.114084 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-06 00:57:04.114099 | orchestrator | Tuesday 06 May 2025 00:51:36 +0000 (0:00:00.284) 0:07:06.678 *********** 2025-05-06 00:57:04.114108 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.114116 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.114125 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.114133 | orchestrator | 2025-05-06 00:57:04.114142 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-06 00:57:04.114164 | orchestrator | Tuesday 06 May 2025 00:51:36 +0000 (0:00:00.279) 
0:07:06.958 *********** 2025-05-06 00:57:04.114178 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.114193 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.114209 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.114225 | orchestrator | 2025-05-06 00:57:04.114240 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-06 00:57:04.114254 | orchestrator | Tuesday 06 May 2025 00:51:37 +0000 (0:00:00.422) 0:07:07.380 *********** 2025-05-06 00:57:04.114269 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.114281 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.114295 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.114308 | orchestrator | 2025-05-06 00:57:04.114319 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-06 00:57:04.114332 | orchestrator | Tuesday 06 May 2025 00:51:37 +0000 (0:00:00.306) 0:07:07.687 *********** 2025-05-06 00:57:04.114346 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.114362 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.114377 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.114390 | orchestrator | 2025-05-06 00:57:04.114453 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-06 00:57:04.114465 | orchestrator | Tuesday 06 May 2025 00:51:37 +0000 (0:00:00.311) 0:07:07.999 *********** 2025-05-06 00:57:04.114477 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.114488 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.114501 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.114512 | orchestrator | 2025-05-06 00:57:04.114523 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-06 
00:57:04.114535 | orchestrator | Tuesday 06 May 2025 00:51:38 +0000 (0:00:00.264) 0:07:08.263 *********** 2025-05-06 00:57:04.114547 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.114559 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.114571 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.114582 | orchestrator | 2025-05-06 00:57:04.114594 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-06 00:57:04.114606 | orchestrator | Tuesday 06 May 2025 00:51:38 +0000 (0:00:00.432) 0:07:08.695 *********** 2025-05-06 00:57:04.114618 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.114630 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.114641 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.114675 | orchestrator | 2025-05-06 00:57:04.114688 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-06 00:57:04.114703 | orchestrator | Tuesday 06 May 2025 00:51:38 +0000 (0:00:00.272) 0:07:08.968 *********** 2025-05-06 00:57:04.114716 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.114729 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.114743 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.114756 | orchestrator | 2025-05-06 00:57:04.114769 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-06 00:57:04.114782 | orchestrator | Tuesday 06 May 2025 00:51:39 +0000 (0:00:00.270) 0:07:09.238 *********** 2025-05-06 00:57:04.114793 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.114803 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.114812 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.114822 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.114832 | 
orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.114842 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.114860 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.114871 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.114884 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.114905 | orchestrator | 2025-05-06 00:57:04.114917 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-06 00:57:04.114929 | orchestrator | Tuesday 06 May 2025 00:51:39 +0000 (0:00:00.321) 0:07:09.560 *********** 2025-05-06 00:57:04.114942 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-06 00:57:04.114960 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-06 00:57:04.114972 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-06 00:57:04.114983 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-06 00:57:04.114996 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.115007 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.115019 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-06 00:57:04.115032 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-06 00:57:04.115043 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.115055 | orchestrator | 2025-05-06 00:57:04.115069 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-06 00:57:04.115088 | orchestrator | Tuesday 06 May 2025 00:51:39 +0000 (0:00:00.617) 0:07:10.178 *********** 2025-05-06 00:57:04.115102 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.115120 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.115139 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.115153 | orchestrator | 
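The two `osd_memory_target` tasks above loop over both the `osd memory target` and `osd_memory_target` spellings because Ceph normalizes option names, treating spaces and underscores as equivalent, so an override may appear in `ceph_conf_overrides` under either form. A minimal runnable sketch of that normalization (the `key` variable is illustrative, not taken from ceph-ansible):

```shell
# Ceph treats 'osd memory target' and 'osd_memory_target' as the same option;
# ceph-ansible therefore checks ceph_conf_overrides under both spellings.
# Hypothetical one-liner showing the space-to-underscore normalization:
key="osd memory target"
normalized=$(printf '%s' "$key" | tr ' ' '_')
echo "$normalized"   # → osd_memory_target
```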
2025-05-06 00:57:04.115170 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-06 00:57:04.115189 | orchestrator | Tuesday 06 May 2025 00:51:40 +0000 (0:00:00.355) 0:07:10.533 *********** 2025-05-06 00:57:04.115203 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.115221 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.115239 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.115256 | orchestrator | 2025-05-06 00:57:04.115271 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-06 00:57:04.115290 | orchestrator | Tuesday 06 May 2025 00:51:40 +0000 (0:00:00.404) 0:07:10.937 *********** 2025-05-06 00:57:04.115307 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.115320 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.115337 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.115356 | orchestrator | 2025-05-06 00:57:04.115370 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-06 00:57:04.115385 | orchestrator | Tuesday 06 May 2025 00:51:41 +0000 (0:00:00.400) 0:07:11.337 *********** 2025-05-06 00:57:04.115403 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.115418 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.115431 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.115447 | orchestrator | 2025-05-06 00:57:04.115462 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-06 00:57:04.115478 | orchestrator | Tuesday 06 May 2025 00:51:41 +0000 (0:00:00.307) 0:07:11.645 *********** 2025-05-06 00:57:04.115490 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.115505 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.115519 | orchestrator | skipping: 
[testbed-node-5] 2025-05-06 00:57:04.115535 | orchestrator | 2025-05-06 00:57:04.115551 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-06 00:57:04.115566 | orchestrator | Tuesday 06 May 2025 00:51:42 +0000 (0:00:00.588) 0:07:12.234 *********** 2025-05-06 00:57:04.115579 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.115592 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.115606 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.115621 | orchestrator | 2025-05-06 00:57:04.115705 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-06 00:57:04.115721 | orchestrator | Tuesday 06 May 2025 00:51:42 +0000 (0:00:00.320) 0:07:12.554 *********** 2025-05-06 00:57:04.115742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.115756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.115770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.115783 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.115797 | orchestrator | 2025-05-06 00:57:04.115809 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-06 00:57:04.115820 | orchestrator | Tuesday 06 May 2025 00:51:42 +0000 (0:00:00.412) 0:07:12.966 *********** 2025-05-06 00:57:04.115832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.115844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.115856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.115868 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.115880 | orchestrator | 2025-05-06 00:57:04.115892 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-06 
00:57:04.115903 | orchestrator | Tuesday 06 May 2025 00:51:43 +0000 (0:00:00.382) 0:07:13.349 *********** 2025-05-06 00:57:04.115917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.115929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.115941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.115953 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.115966 | orchestrator | 2025-05-06 00:57:04.115978 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-06 00:57:04.115992 | orchestrator | Tuesday 06 May 2025 00:51:43 +0000 (0:00:00.410) 0:07:13.759 *********** 2025-05-06 00:57:04.116004 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.116017 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.116030 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.116042 | orchestrator | 2025-05-06 00:57:04.116054 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-06 00:57:04.116067 | orchestrator | Tuesday 06 May 2025 00:51:44 +0000 (0:00:00.578) 0:07:14.337 *********** 2025-05-06 00:57:04.116081 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-06 00:57:04.116094 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.116105 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-06 00:57:04.116119 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.116131 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-06 00:57:04.116144 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.116156 | orchestrator | 2025-05-06 00:57:04.116168 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-06 00:57:04.116180 | orchestrator | Tuesday 06 May 2025 00:51:44 +0000 (0:00:00.564) 0:07:14.902 
*********** 2025-05-06 00:57:04.116192 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.116205 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.116218 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.116230 | orchestrator | 2025-05-06 00:57:04.116240 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-06 00:57:04.116249 | orchestrator | Tuesday 06 May 2025 00:51:44 +0000 (0:00:00.313) 0:07:15.216 *********** 2025-05-06 00:57:04.116258 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.116266 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.116276 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.116286 | orchestrator | 2025-05-06 00:57:04.116295 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-06 00:57:04.116306 | orchestrator | Tuesday 06 May 2025 00:51:45 +0000 (0:00:00.331) 0:07:15.547 *********** 2025-05-06 00:57:04.116316 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-06 00:57:04.116327 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.116337 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-06 00:57:04.116349 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.116370 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-06 00:57:04.116380 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.116391 | orchestrator | 2025-05-06 00:57:04.116401 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-06 00:57:04.116412 | orchestrator | Tuesday 06 May 2025 00:51:46 +0000 (0:00:00.982) 0:07:16.530 *********** 2025-05-06 00:57:04.116423 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.116434 | orchestrator | skipping: 
[testbed-node-3] 2025-05-06 00:57:04.116445 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.116455 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.116466 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.116477 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.116488 | orchestrator | 2025-05-06 00:57:04.116498 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-06 00:57:04.116509 | orchestrator | Tuesday 06 May 2025 00:51:46 +0000 (0:00:00.344) 0:07:16.874 *********** 2025-05-06 00:57:04.116519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.116533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.116550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.116563 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-06 00:57:04.116583 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.116599 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-06 00:57:04.116723 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-06 00:57:04.116742 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.116755 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-06 00:57:04.116770 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-06 00:57:04.116784 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-06 00:57:04.116796 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.116810 | orchestrator | 2025-05-06 00:57:04.116830 | orchestrator | TASK [ceph-config : generate 
ceph.conf configuration file] ********************* 2025-05-06 00:57:04.116841 | orchestrator | Tuesday 06 May 2025 00:51:47 +0000 (0:00:00.644) 0:07:17.519 *********** 2025-05-06 00:57:04.116854 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.116869 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.116884 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.116895 | orchestrator | 2025-05-06 00:57:04.116910 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-06 00:57:04.116924 | orchestrator | Tuesday 06 May 2025 00:51:48 +0000 (0:00:00.774) 0:07:18.294 *********** 2025-05-06 00:57:04.116937 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-06 00:57:04.116948 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.116963 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-06 00:57:04.116972 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.116985 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-06 00:57:04.116999 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.117020 | orchestrator | 2025-05-06 00:57:04.117034 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-06 00:57:04.117048 | orchestrator | Tuesday 06 May 2025 00:51:48 +0000 (0:00:00.570) 0:07:18.865 *********** 2025-05-06 00:57:04.117063 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.117076 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.117088 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.117100 | orchestrator | 2025-05-06 00:57:04.117114 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-06 00:57:04.117137 | orchestrator | Tuesday 06 May 2025 00:51:49 +0000 (0:00:00.780) 0:07:19.645 *********** 2025-05-06 00:57:04.117145 | orchestrator | skipping: [testbed-node-3] 
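The skipped `rgw_instances` items above already show the shape of the per-host data ceph-ansible derives for RadosGW: an instance name, a bind address, and a frontend port. A small sketch assembling the endpoint that testbed-node-3's `rgw0` instance would listen on (values copied from the task output; the `endpoint` variable is illustrative):

```shell
# Values from the skipped set_fact rgw_instances_host item for testbed-node-3.
instance_name=rgw0
radosgw_address=192.168.16.13
radosgw_frontend_port=8081
# The radosgw frontend for this instance would bind here:
endpoint="${instance_name}: http://${radosgw_address}:${radosgw_frontend_port}"
echo "$endpoint"   # → rgw0: http://192.168.16.13:8081
```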
2025-05-06 00:57:04.117153 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.117160 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.117168 | orchestrator | 2025-05-06 00:57:04.117175 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-05-06 00:57:04.117182 | orchestrator | Tuesday 06 May 2025 00:51:49 +0000 (0:00:00.531) 0:07:20.177 *********** 2025-05-06 00:57:04.117190 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.117198 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.117206 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.117214 | orchestrator | 2025-05-06 00:57:04.117221 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-05-06 00:57:04.117234 | orchestrator | Tuesday 06 May 2025 00:51:50 +0000 (0:00:00.609) 0:07:20.786 *********** 2025-05-06 00:57:04.117243 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-06 00:57:04.117251 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-06 00:57:04.117259 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-06 00:57:04.117266 | orchestrator | 2025-05-06 00:57:04.117274 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-05-06 00:57:04.117281 | orchestrator | Tuesday 06 May 2025 00:51:51 +0000 (0:00:00.644) 0:07:21.430 *********** 2025-05-06 00:57:04.117289 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.117296 | orchestrator | 2025-05-06 00:57:04.117306 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-06 00:57:04.117311 | orchestrator | Tuesday 06 May 2025 00:51:51 +0000 (0:00:00.517) 0:07:21.948 
*********** 2025-05-06 00:57:04.117316 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.117321 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.117326 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.117331 | orchestrator | 2025-05-06 00:57:04.117336 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-06 00:57:04.117341 | orchestrator | Tuesday 06 May 2025 00:51:51 +0000 (0:00:00.280) 0:07:22.228 *********** 2025-05-06 00:57:04.117346 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.117350 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.117355 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.117360 | orchestrator | 2025-05-06 00:57:04.117365 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-06 00:57:04.117370 | orchestrator | Tuesday 06 May 2025 00:51:52 +0000 (0:00:00.368) 0:07:22.597 *********** 2025-05-06 00:57:04.117375 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.117380 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.117384 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.117389 | orchestrator | 2025-05-06 00:57:04.117394 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-06 00:57:04.117399 | orchestrator | Tuesday 06 May 2025 00:51:52 +0000 (0:00:00.248) 0:07:22.845 *********** 2025-05-06 00:57:04.117404 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.117409 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.117413 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.117418 | orchestrator | 2025-05-06 00:57:04.117423 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-06 00:57:04.117428 | orchestrator | Tuesday 06 May 2025 00:51:52 +0000 (0:00:00.252) 0:07:23.097 
*********** 2025-05-06 00:57:04.117433 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.117438 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.117442 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.117447 | orchestrator | 2025-05-06 00:57:04.117452 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-06 00:57:04.117495 | orchestrator | Tuesday 06 May 2025 00:51:53 +0000 (0:00:00.603) 0:07:23.701 *********** 2025-05-06 00:57:04.117503 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.117510 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.117526 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.117537 | orchestrator | 2025-05-06 00:57:04.117545 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-06 00:57:04.117552 | orchestrator | Tuesday 06 May 2025 00:51:53 +0000 (0:00:00.435) 0:07:24.136 *********** 2025-05-06 00:57:04.117560 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-06 00:57:04.117572 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-06 00:57:04.117580 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-06 00:57:04.117588 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-06 00:57:04.117596 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-06 00:57:04.117603 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-06 00:57:04.117611 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-06 00:57:04.117617 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'vm.swappiness', 'value': 10}) 2025-05-06 00:57:04.117625 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-06 00:57:04.117632 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-06 00:57:04.117640 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-06 00:57:04.117667 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-06 00:57:04.117676 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-06 00:57:04.117683 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-06 00:57:04.117691 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-06 00:57:04.117697 | orchestrator | 2025-05-06 00:57:04.117703 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-05-06 00:57:04.117707 | orchestrator | Tuesday 06 May 2025 00:51:56 +0000 (0:00:03.028) 0:07:27.165 *********** 2025-05-06 00:57:04.117712 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.117717 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.117722 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.117727 | orchestrator | 2025-05-06 00:57:04.117732 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-05-06 00:57:04.117740 | orchestrator | Tuesday 06 May 2025 00:51:57 +0000 (0:00:00.257) 0:07:27.422 *********** 2025-05-06 00:57:04.117745 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.117750 | orchestrator | 2025-05-06 00:57:04.117755 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] 
********************* 2025-05-06 00:57:04.117759 | orchestrator | Tuesday 06 May 2025 00:51:57 +0000 (0:00:00.603) 0:07:28.026 *********** 2025-05-06 00:57:04.117764 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-06 00:57:04.117769 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-06 00:57:04.117774 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-06 00:57:04.117779 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-06 00:57:04.117784 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-06 00:57:04.117789 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-06 00:57:04.117799 | orchestrator | 2025-05-06 00:57:04.117804 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-05-06 00:57:04.117809 | orchestrator | Tuesday 06 May 2025 00:51:58 +0000 (0:00:00.948) 0:07:28.975 *********** 2025-05-06 00:57:04.117813 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-06 00:57:04.117819 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-06 00:57:04.117824 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-06 00:57:04.117828 | orchestrator | 2025-05-06 00:57:04.117833 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-05-06 00:57:04.117838 | orchestrator | Tuesday 06 May 2025 00:52:00 +0000 (0:00:01.810) 0:07:30.786 *********** 2025-05-06 00:57:04.117843 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-06 00:57:04.117848 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-06 00:57:04.117853 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.117861 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-06 00:57:04.117866 | orchestrator | skipping: 
[testbed-node-4] => (item=None)  2025-05-06 00:57:04.117871 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.117875 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-06 00:57:04.117880 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-06 00:57:04.117885 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.117890 | orchestrator | 2025-05-06 00:57:04.117895 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-05-06 00:57:04.117899 | orchestrator | Tuesday 06 May 2025 00:52:02 +0000 (0:00:01.474) 0:07:32.261 *********** 2025-05-06 00:57:04.117904 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-06 00:57:04.117909 | orchestrator | 2025-05-06 00:57:04.117914 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-05-06 00:57:04.117937 | orchestrator | Tuesday 06 May 2025 00:52:04 +0000 (0:00:02.746) 0:07:35.007 *********** 2025-05-06 00:57:04.117942 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.117947 | orchestrator | 2025-05-06 00:57:04.117952 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-05-06 00:57:04.117957 | orchestrator | Tuesday 06 May 2025 00:52:05 +0000 (0:00:00.713) 0:07:35.720 *********** 2025-05-06 00:57:04.117962 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.117967 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.117972 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.117977 | orchestrator | 2025-05-06 00:57:04.117982 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-05-06 00:57:04.117987 | orchestrator | Tuesday 06 May 2025 00:52:05 +0000 (0:00:00.297) 
0:07:36.018 *********** 2025-05-06 00:57:04.117992 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.117997 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.118002 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.118011 | orchestrator | 2025-05-06 00:57:04.118032 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-05-06 00:57:04.118038 | orchestrator | Tuesday 06 May 2025 00:52:06 +0000 (0:00:00.300) 0:07:36.319 *********** 2025-05-06 00:57:04.118043 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118048 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.118052 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.118057 | orchestrator | 2025-05-06 00:57:04.118062 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-05-06 00:57:04.118067 | orchestrator | Tuesday 06 May 2025 00:52:06 +0000 (0:00:00.299) 0:07:36.619 *********** 2025-05-06 00:57:04.118072 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.118076 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.118085 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.118090 | orchestrator | 2025-05-06 00:57:04.118095 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-05-06 00:57:04.118100 | orchestrator | Tuesday 06 May 2025 00:52:06 +0000 (0:00:00.548) 0:07:37.167 *********** 2025-05-06 00:57:04.118105 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.118109 | orchestrator | 2025-05-06 00:57:04.118114 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-05-06 00:57:04.118119 | orchestrator | Tuesday 06 May 2025 00:52:07 +0000 (0:00:00.527) 0:07:37.694 *********** 
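The `apply operating system tuning` task above set five kernel parameters on each OSD node. A runnable sketch that reproduces the same settings as a sysctl.d-style fragment; it writes to the current directory so it runs unprivileged, whereas on a real node the file would live under `/etc/sysctl.d/` and be applied with `sysctl --system` as root:

```shell
# Kernel settings exactly as reported changed on testbed-node-3/4/5 above.
cat > ./99-ceph-osd-tuning.conf <<'EOF'
fs.aio-max-nr = 1048576
fs.file-max = 26234859
vm.zone_reclaim_mode = 0
vm.swappiness = 10
vm.min_free_kbytes = 67584
EOF
cat ./99-ceph-osd-tuning.conf
```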
2025-05-06 00:57:04.118124 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8a0f4265-dd5d-556c-ac35-a800ef93314e', 'data_vg': 'ceph-8a0f4265-dd5d-556c-ac35-a800ef93314e'}) 2025-05-06 00:57:04.118130 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-83550523-1175-5b11-b232-63a45b36e32a', 'data_vg': 'ceph-83550523-1175-5b11-b232-63a45b36e32a'}) 2025-05-06 00:57:04.118135 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5100a9d2-ae69-5e7a-989d-a5d69986fee9', 'data_vg': 'ceph-5100a9d2-ae69-5e7a-989d-a5d69986fee9'}) 2025-05-06 00:57:04.118140 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-108592b4-5156-5470-952e-be389a9738cf', 'data_vg': 'ceph-108592b4-5156-5470-952e-be389a9738cf'}) 2025-05-06 00:57:04.118145 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-376b0c1a-f7d0-50df-9bf6-f05e021d85c5', 'data_vg': 'ceph-376b0c1a-f7d0-50df-9bf6-f05e021d85c5'}) 2025-05-06 00:57:04.118150 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2fbee355-69b3-5569-a73a-eae1d5356d34', 'data_vg': 'ceph-2fbee355-69b3-5569-a73a-eae1d5356d34'}) 2025-05-06 00:57:04.118155 | orchestrator | 2025-05-06 00:57:04.118159 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-06 00:57:04.118164 | orchestrator | Tuesday 06 May 2025 00:52:46 +0000 (0:00:39.160) 0:08:16.855 *********** 2025-05-06 00:57:04.118169 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118174 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.118179 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.118183 | orchestrator | 2025-05-06 00:57:04.118188 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-05-06 00:57:04.118193 | orchestrator | Tuesday 06 May 2025 00:52:47 +0000 (0:00:00.473) 0:08:17.328 *********** 2025-05-06 00:57:04.118198 | orchestrator | 
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.118202 | orchestrator | 2025-05-06 00:57:04.118207 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-05-06 00:57:04.118212 | orchestrator | Tuesday 06 May 2025 00:52:47 +0000 (0:00:00.574) 0:08:17.903 *********** 2025-05-06 00:57:04.118217 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.118221 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.118230 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.118235 | orchestrator | 2025-05-06 00:57:04.118240 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-05-06 00:57:04.118245 | orchestrator | Tuesday 06 May 2025 00:52:48 +0000 (0:00:00.602) 0:08:18.505 *********** 2025-05-06 00:57:04.118249 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.118254 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.118259 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.118264 | orchestrator | 2025-05-06 00:57:04.118269 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-05-06 00:57:04.118274 | orchestrator | Tuesday 06 May 2025 00:52:50 +0000 (0:00:01.951) 0:08:20.457 *********** 2025-05-06 00:57:04.118290 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.118295 | orchestrator | 2025-05-06 00:57:04.118300 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-05-06 00:57:04.118309 | orchestrator | Tuesday 06 May 2025 00:52:50 +0000 (0:00:00.557) 0:08:21.014 *********** 2025-05-06 00:57:04.118314 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.118319 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.118323 | orchestrator | changed: 
[testbed-node-5] 2025-05-06 00:57:04.118328 | orchestrator | 2025-05-06 00:57:04.118335 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-05-06 00:57:04.118340 | orchestrator | Tuesday 06 May 2025 00:52:52 +0000 (0:00:01.397) 0:08:22.411 *********** 2025-05-06 00:57:04.118345 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.118350 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.118354 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.118359 | orchestrator | 2025-05-06 00:57:04.118364 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-05-06 00:57:04.118369 | orchestrator | Tuesday 06 May 2025 00:52:53 +0000 (0:00:01.200) 0:08:23.612 *********** 2025-05-06 00:57:04.118373 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.118378 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.118383 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.118388 | orchestrator | 2025-05-06 00:57:04.118392 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-05-06 00:57:04.118397 | orchestrator | Tuesday 06 May 2025 00:52:55 +0000 (0:00:01.645) 0:08:25.258 *********** 2025-05-06 00:57:04.118402 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118406 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.118411 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.118416 | orchestrator | 2025-05-06 00:57:04.118421 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-05-06 00:57:04.118426 | orchestrator | Tuesday 06 May 2025 00:52:55 +0000 (0:00:00.348) 0:08:25.607 *********** 2025-05-06 00:57:04.118430 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118435 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.118440 | orchestrator | skipping: 
[testbed-node-5] 2025-05-06 00:57:04.118445 | orchestrator | 2025-05-06 00:57:04.118449 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-05-06 00:57:04.118454 | orchestrator | Tuesday 06 May 2025 00:52:55 +0000 (0:00:00.582) 0:08:26.189 *********** 2025-05-06 00:57:04.118459 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-06 00:57:04.118464 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-05-06 00:57:04.118469 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-05-06 00:57:04.118474 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-05-06 00:57:04.118479 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-05-06 00:57:04.118483 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-05-06 00:57:04.118488 | orchestrator | 2025-05-06 00:57:04.118493 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-05-06 00:57:04.118498 | orchestrator | Tuesday 06 May 2025 00:52:56 +0000 (0:00:00.926) 0:08:27.116 *********** 2025-05-06 00:57:04.118502 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-05-06 00:57:04.118507 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-05-06 00:57:04.118512 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-05-06 00:57:04.118517 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-05-06 00:57:04.118522 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-05-06 00:57:04.118526 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-05-06 00:57:04.118531 | orchestrator | 2025-05-06 00:57:04.118536 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-05-06 00:57:04.118541 | orchestrator | Tuesday 06 May 2025 00:53:00 +0000 (0:00:03.542) 0:08:30.658 *********** 2025-05-06 00:57:04.118545 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118551 | orchestrator | skipping: [testbed-node-4] 2025-05-06 
00:57:04.118556 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-06 00:57:04.118560 | orchestrator | 2025-05-06 00:57:04.118565 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-05-06 00:57:04.118573 | orchestrator | Tuesday 06 May 2025 00:53:03 +0000 (0:00:03.069) 0:08:33.727 *********** 2025-05-06 00:57:04.118578 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118583 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.118587 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-05-06 00:57:04.118592 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-06 00:57:04.118597 | orchestrator | 2025-05-06 00:57:04.118602 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-05-06 00:57:04.118607 | orchestrator | Tuesday 06 May 2025 00:53:16 +0000 (0:00:12.651) 0:08:46.379 *********** 2025-05-06 00:57:04.118612 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118617 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.118621 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.118626 | orchestrator | 2025-05-06 00:57:04.118631 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-05-06 00:57:04.118636 | orchestrator | Tuesday 06 May 2025 00:53:16 +0000 (0:00:00.441) 0:08:46.821 *********** 2025-05-06 00:57:04.118640 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118645 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.118661 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.118666 | orchestrator | 2025-05-06 00:57:04.118671 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-06 00:57:04.118676 | orchestrator | Tuesday 06 May 2025 
00:53:17 +0000 (0:00:01.173) 0:08:47.995 *********** 2025-05-06 00:57:04.118680 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.118685 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.118690 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.118695 | orchestrator | 2025-05-06 00:57:04.118700 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-06 00:57:04.118704 | orchestrator | Tuesday 06 May 2025 00:53:18 +0000 (0:00:00.959) 0:08:48.955 *********** 2025-05-06 00:57:04.118720 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.118726 | orchestrator | 2025-05-06 00:57:04.118731 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-05-06 00:57:04.118736 | orchestrator | Tuesday 06 May 2025 00:53:19 +0000 (0:00:00.596) 0:08:49.551 *********** 2025-05-06 00:57:04.118740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.118745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.118750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.118755 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118759 | orchestrator | 2025-05-06 00:57:04.118764 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-05-06 00:57:04.118769 | orchestrator | Tuesday 06 May 2025 00:53:19 +0000 (0:00:00.397) 0:08:49.949 *********** 2025-05-06 00:57:04.118774 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118779 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.118783 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.118788 | orchestrator | 2025-05-06 00:57:04.118793 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] 
******************************* 2025-05-06 00:57:04.118798 | orchestrator | Tuesday 06 May 2025 00:53:20 +0000 (0:00:00.313) 0:08:50.262 *********** 2025-05-06 00:57:04.118802 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118807 | orchestrator | 2025-05-06 00:57:04.118812 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-05-06 00:57:04.118817 | orchestrator | Tuesday 06 May 2025 00:53:20 +0000 (0:00:00.456) 0:08:50.719 *********** 2025-05-06 00:57:04.118821 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118826 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.118831 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.118839 | orchestrator | 2025-05-06 00:57:04.118847 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-05-06 00:57:04.118852 | orchestrator | Tuesday 06 May 2025 00:53:20 +0000 (0:00:00.314) 0:08:51.034 *********** 2025-05-06 00:57:04.118857 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118862 | orchestrator | 2025-05-06 00:57:04.118867 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-05-06 00:57:04.118871 | orchestrator | Tuesday 06 May 2025 00:53:21 +0000 (0:00:00.236) 0:08:51.271 *********** 2025-05-06 00:57:04.118876 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118881 | orchestrator | 2025-05-06 00:57:04.118890 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-06 00:57:04.118895 | orchestrator | Tuesday 06 May 2025 00:53:21 +0000 (0:00:00.244) 0:08:51.515 *********** 2025-05-06 00:57:04.118900 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118904 | orchestrator | 2025-05-06 00:57:04.118909 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-06 00:57:04.118914 | 
orchestrator | Tuesday 06 May 2025 00:53:21 +0000 (0:00:00.131) 0:08:51.646 *********** 2025-05-06 00:57:04.118919 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118923 | orchestrator | 2025-05-06 00:57:04.118928 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-06 00:57:04.118933 | orchestrator | Tuesday 06 May 2025 00:53:21 +0000 (0:00:00.232) 0:08:51.879 *********** 2025-05-06 00:57:04.118938 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118945 | orchestrator | 2025-05-06 00:57:04.118950 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-06 00:57:04.118955 | orchestrator | Tuesday 06 May 2025 00:53:21 +0000 (0:00:00.246) 0:08:52.126 *********** 2025-05-06 00:57:04.118960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.118964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.118969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.118974 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118979 | orchestrator | 2025-05-06 00:57:04.118984 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-06 00:57:04.118988 | orchestrator | Tuesday 06 May 2025 00:53:22 +0000 (0:00:00.383) 0:08:52.510 *********** 2025-05-06 00:57:04.118993 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.118998 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119003 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119008 | orchestrator | 2025-05-06 00:57:04.119012 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-05-06 00:57:04.119017 | orchestrator | Tuesday 06 May 2025 00:53:22 +0000 (0:00:00.523) 0:08:53.033 *********** 2025-05-06 00:57:04.119022 | orchestrator | 
skipping: [testbed-node-3] 2025-05-06 00:57:04.119027 | orchestrator | 2025-05-06 00:57:04.119032 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-06 00:57:04.119036 | orchestrator | Tuesday 06 May 2025 00:53:23 +0000 (0:00:00.236) 0:08:53.269 *********** 2025-05-06 00:57:04.119041 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119046 | orchestrator | 2025-05-06 00:57:04.119051 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-06 00:57:04.119055 | orchestrator | Tuesday 06 May 2025 00:53:23 +0000 (0:00:00.223) 0:08:53.493 *********** 2025-05-06 00:57:04.119060 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.119065 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.119070 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.119075 | orchestrator | 2025-05-06 00:57:04.119080 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-06 00:57:04.119085 | orchestrator | 2025-05-06 00:57:04.119089 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-06 00:57:04.119094 | orchestrator | Tuesday 06 May 2025 00:53:26 +0000 (0:00:03.059) 0:08:56.553 *********** 2025-05-06 00:57:04.119102 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.119107 | orchestrator | 2025-05-06 00:57:04.119112 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-06 00:57:04.119126 | orchestrator | Tuesday 06 May 2025 00:53:27 +0000 (0:00:01.334) 0:08:57.887 *********** 2025-05-06 00:57:04.119132 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119137 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.119142 
| orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119146 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.119151 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119156 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.119161 | orchestrator | 2025-05-06 00:57:04.119166 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-06 00:57:04.119171 | orchestrator | Tuesday 06 May 2025 00:53:28 +0000 (0:00:01.068) 0:08:58.956 *********** 2025-05-06 00:57:04.119175 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119180 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119185 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119190 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.119195 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.119200 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.119204 | orchestrator | 2025-05-06 00:57:04.119209 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-06 00:57:04.119214 | orchestrator | Tuesday 06 May 2025 00:53:29 +0000 (0:00:01.155) 0:09:00.111 *********** 2025-05-06 00:57:04.119219 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119224 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119229 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119233 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.119238 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.119243 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.119248 | orchestrator | 2025-05-06 00:57:04.119253 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-06 00:57:04.119257 | orchestrator | Tuesday 06 May 2025 00:53:31 +0000 (0:00:01.255) 0:09:01.367 *********** 2025-05-06 00:57:04.119262 | orchestrator | skipping: [testbed-node-0] 
2025-05-06 00:57:04.119267 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119272 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119277 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.119281 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.119286 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.119291 | orchestrator | 2025-05-06 00:57:04.119298 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-06 00:57:04.119303 | orchestrator | Tuesday 06 May 2025 00:53:32 +0000 (0:00:01.007) 0:09:02.374 *********** 2025-05-06 00:57:04.119308 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119313 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119318 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.119323 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.119327 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.119332 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119337 | orchestrator | 2025-05-06 00:57:04.119342 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-06 00:57:04.119347 | orchestrator | Tuesday 06 May 2025 00:53:32 +0000 (0:00:00.850) 0:09:03.225 *********** 2025-05-06 00:57:04.119351 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119356 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119361 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119366 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119371 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119376 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119385 | orchestrator | 2025-05-06 00:57:04.119390 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-06 00:57:04.119395 | orchestrator | Tuesday 06 May 2025 00:53:33 +0000 
(0:00:00.596) 0:09:03.821 *********** 2025-05-06 00:57:04.119400 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119404 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119409 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119414 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119419 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119424 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119431 | orchestrator | 2025-05-06 00:57:04.119436 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-06 00:57:04.119441 | orchestrator | Tuesday 06 May 2025 00:53:34 +0000 (0:00:00.825) 0:09:04.646 *********** 2025-05-06 00:57:04.119446 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119451 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119455 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119460 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119465 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119470 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119475 | orchestrator | 2025-05-06 00:57:04.119480 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-06 00:57:04.119484 | orchestrator | Tuesday 06 May 2025 00:53:35 +0000 (0:00:00.662) 0:09:05.309 *********** 2025-05-06 00:57:04.119489 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119494 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119499 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119504 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119508 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119513 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119518 | orchestrator | 2025-05-06 00:57:04.119523 | orchestrator | TASK [ceph-handler : 
check for a rbd-target-gw container] ********************** 2025-05-06 00:57:04.119528 | orchestrator | Tuesday 06 May 2025 00:53:36 +0000 (0:00:00.972) 0:09:06.281 *********** 2025-05-06 00:57:04.119533 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119537 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119542 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119547 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119552 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119557 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119562 | orchestrator | 2025-05-06 00:57:04.119569 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-06 00:57:04.119574 | orchestrator | Tuesday 06 May 2025 00:53:36 +0000 (0:00:00.602) 0:09:06.883 *********** 2025-05-06 00:57:04.119579 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.119584 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.119589 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.119594 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.119599 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.119603 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.119608 | orchestrator | 2025-05-06 00:57:04.119622 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-06 00:57:04.119628 | orchestrator | Tuesday 06 May 2025 00:53:38 +0000 (0:00:01.356) 0:09:08.240 *********** 2025-05-06 00:57:04.119633 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119638 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119642 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119658 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119664 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119669 | orchestrator | skipping: [testbed-node-5] 
2025-05-06 00:57:04.119674 | orchestrator | 2025-05-06 00:57:04.119678 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-06 00:57:04.119683 | orchestrator | Tuesday 06 May 2025 00:53:38 +0000 (0:00:00.647) 0:09:08.888 *********** 2025-05-06 00:57:04.119692 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.119696 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.119701 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.119706 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119711 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119716 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119720 | orchestrator | 2025-05-06 00:57:04.119725 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-06 00:57:04.119730 | orchestrator | Tuesday 06 May 2025 00:53:39 +0000 (0:00:01.236) 0:09:10.124 *********** 2025-05-06 00:57:04.119735 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119740 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119744 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119749 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.119754 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.119759 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.119764 | orchestrator | 2025-05-06 00:57:04.119768 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-06 00:57:04.119773 | orchestrator | Tuesday 06 May 2025 00:53:40 +0000 (0:00:00.755) 0:09:10.880 *********** 2025-05-06 00:57:04.119778 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119783 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119788 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119793 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.119797 | orchestrator | ok: 
[testbed-node-4] 2025-05-06 00:57:04.119802 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.119807 | orchestrator | 2025-05-06 00:57:04.119812 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-06 00:57:04.119817 | orchestrator | Tuesday 06 May 2025 00:53:41 +0000 (0:00:00.879) 0:09:11.759 *********** 2025-05-06 00:57:04.119822 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119826 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119831 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119836 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.119844 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.119849 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.119854 | orchestrator | 2025-05-06 00:57:04.119859 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-06 00:57:04.119863 | orchestrator | Tuesday 06 May 2025 00:53:42 +0000 (0:00:00.643) 0:09:12.403 *********** 2025-05-06 00:57:04.119868 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119873 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119878 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.119883 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119888 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119892 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119897 | orchestrator | 2025-05-06 00:57:04.119902 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-06 00:57:04.119907 | orchestrator | Tuesday 06 May 2025 00:53:42 +0000 (0:00:00.804) 0:09:13.207 *********** 2025-05-06 00:57:04.119912 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.119917 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.119921 | orchestrator | skipping: [testbed-node-2] 2025-05-06 
00:57:04.119926 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119931 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119936 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119941 | orchestrator | 2025-05-06 00:57:04.119946 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-06 00:57:04.119950 | orchestrator | Tuesday 06 May 2025 00:53:43 +0000 (0:00:00.640) 0:09:13.847 *********** 2025-05-06 00:57:04.119955 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.119960 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.119965 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.119969 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.119978 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.119983 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.119988 | orchestrator | 2025-05-06 00:57:04.119992 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-06 00:57:04.119997 | orchestrator | Tuesday 06 May 2025 00:53:44 +0000 (0:00:01.062) 0:09:14.910 *********** 2025-05-06 00:57:04.120002 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.120007 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:57:04.120012 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:57:04.120016 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.120021 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.120026 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.120030 | orchestrator | 2025-05-06 00:57:04.120038 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-06 00:57:04.120043 | orchestrator | Tuesday 06 May 2025 00:53:45 +0000 (0:00:00.686) 0:09:15.597 *********** 2025-05-06 00:57:04.120048 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120053 | orchestrator | skipping: [testbed-node-1] 
2025-05-06 00:57:04.120057 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120062 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120067 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120072 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120077 | orchestrator | 2025-05-06 00:57:04.120082 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-06 00:57:04.120086 | orchestrator | Tuesday 06 May 2025 00:53:46 +0000 (0:00:01.031) 0:09:16.628 *********** 2025-05-06 00:57:04.120091 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120096 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120101 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120106 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120121 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120126 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120131 | orchestrator | 2025-05-06 00:57:04.120136 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-06 00:57:04.120141 | orchestrator | Tuesday 06 May 2025 00:53:47 +0000 (0:00:00.696) 0:09:17.325 *********** 2025-05-06 00:57:04.120146 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120151 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120155 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120160 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120165 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120170 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120175 | orchestrator | 2025-05-06 00:57:04.120180 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-06 00:57:04.120185 | orchestrator | Tuesday 06 May 2025 00:53:48 +0000 (0:00:01.010) 0:09:18.335 *********** 
2025-05-06 00:57:04.120189 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120194 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120199 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120204 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120209 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120214 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120221 | orchestrator | 2025-05-06 00:57:04.120226 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-06 00:57:04.120231 | orchestrator | Tuesday 06 May 2025 00:53:48 +0000 (0:00:00.649) 0:09:18.985 *********** 2025-05-06 00:57:04.120236 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120241 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120245 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120250 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120255 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120260 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120265 | orchestrator | 2025-05-06 00:57:04.120270 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-06 00:57:04.120279 | orchestrator | Tuesday 06 May 2025 00:53:49 +0000 (0:00:00.815) 0:09:19.800 *********** 2025-05-06 00:57:04.120284 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120288 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120293 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120298 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120303 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120308 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120313 | orchestrator | 2025-05-06 00:57:04.120318 | orchestrator | TASK [ceph-config : set_fact _devices] 
***************************************** 2025-05-06 00:57:04.120322 | orchestrator | Tuesday 06 May 2025 00:53:50 +0000 (0:00:00.630) 0:09:20.431 *********** 2025-05-06 00:57:04.120327 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120332 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120337 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120342 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120347 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120352 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120356 | orchestrator | 2025-05-06 00:57:04.120361 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-06 00:57:04.120366 | orchestrator | Tuesday 06 May 2025 00:53:51 +0000 (0:00:00.925) 0:09:21.356 *********** 2025-05-06 00:57:04.120371 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120376 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120381 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120386 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120390 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120395 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120400 | orchestrator | 2025-05-06 00:57:04.120405 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-06 00:57:04.120410 | orchestrator | Tuesday 06 May 2025 00:53:51 +0000 (0:00:00.623) 0:09:21.980 *********** 2025-05-06 00:57:04.120415 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120420 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120425 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120430 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120434 | orchestrator | skipping: [testbed-node-4] 2025-05-06 
00:57:04.120439 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120444 | orchestrator | 2025-05-06 00:57:04.120449 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-06 00:57:04.120454 | orchestrator | Tuesday 06 May 2025 00:53:52 +0000 (0:00:00.876) 0:09:22.857 *********** 2025-05-06 00:57:04.120459 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120463 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120468 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120473 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120478 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120483 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120488 | orchestrator | 2025-05-06 00:57:04.120493 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-06 00:57:04.120497 | orchestrator | Tuesday 06 May 2025 00:53:53 +0000 (0:00:00.749) 0:09:23.606 *********** 2025-05-06 00:57:04.120505 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120510 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120515 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120520 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120525 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120529 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120534 | orchestrator | 2025-05-06 00:57:04.120539 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-06 00:57:04.120544 | orchestrator | Tuesday 06 May 2025 00:53:54 +0000 (0:00:01.049) 0:09:24.656 *********** 2025-05-06 00:57:04.120552 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120557 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120562 | orchestrator 
| skipping: [testbed-node-2] 2025-05-06 00:57:04.120567 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120572 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120577 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120582 | orchestrator | 2025-05-06 00:57:04.120596 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-06 00:57:04.120602 | orchestrator | Tuesday 06 May 2025 00:53:55 +0000 (0:00:00.609) 0:09:25.265 *********** 2025-05-06 00:57:04.120607 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-06 00:57:04.120612 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-06 00:57:04.120617 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120622 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-06 00:57:04.120626 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-06 00:57:04.120631 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120636 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-06 00:57:04.120641 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-06 00:57:04.120678 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120691 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.120699 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.120710 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120722 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.120730 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.120737 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120745 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.120753 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.120760 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120768 | orchestrator 
| 2025-05-06 00:57:04.120775 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-06 00:57:04.120783 | orchestrator | Tuesday 06 May 2025 00:53:55 +0000 (0:00:00.909) 0:09:26.174 *********** 2025-05-06 00:57:04.120791 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-06 00:57:04.120804 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-06 00:57:04.120809 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120814 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-06 00:57:04.120819 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-06 00:57:04.120823 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120828 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-06 00:57:04.120833 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-06 00:57:04.120838 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120842 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-06 00:57:04.120847 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-06 00:57:04.120852 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-06 00:57:04.120857 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-06 00:57:04.120861 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120866 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120871 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-06 00:57:04.120876 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-06 00:57:04.120881 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120885 | orchestrator | 2025-05-06 00:57:04.120890 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target] ******************************* 2025-05-06 00:57:04.120895 | orchestrator | Tuesday 06 May 2025 00:53:56 +0000 (0:00:00.590) 0:09:26.765 *********** 2025-05-06 00:57:04.120905 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120910 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120915 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120920 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120925 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120929 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120934 | orchestrator | 2025-05-06 00:57:04.120939 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-06 00:57:04.120944 | orchestrator | Tuesday 06 May 2025 00:53:57 +0000 (0:00:00.635) 0:09:27.400 *********** 2025-05-06 00:57:04.120949 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120953 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.120958 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.120963 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.120968 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.120972 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.120977 | orchestrator | 2025-05-06 00:57:04.120982 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-06 00:57:04.120987 | orchestrator | Tuesday 06 May 2025 00:53:57 +0000 (0:00:00.473) 0:09:27.874 *********** 2025-05-06 00:57:04.120992 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.120997 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121001 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121006 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121011 | orchestrator | skipping: 
[testbed-node-4] 2025-05-06 00:57:04.121016 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121020 | orchestrator | 2025-05-06 00:57:04.121025 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-06 00:57:04.121030 | orchestrator | Tuesday 06 May 2025 00:53:58 +0000 (0:00:00.791) 0:09:28.665 *********** 2025-05-06 00:57:04.121035 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121040 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121044 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121049 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121054 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121059 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121064 | orchestrator | 2025-05-06 00:57:04.121068 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-06 00:57:04.121076 | orchestrator | Tuesday 06 May 2025 00:53:58 +0000 (0:00:00.557) 0:09:29.222 *********** 2025-05-06 00:57:04.121081 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121085 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121090 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121109 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121114 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121119 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121124 | orchestrator | 2025-05-06 00:57:04.121129 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-06 00:57:04.121134 | orchestrator | Tuesday 06 May 2025 00:53:59 +0000 (0:00:00.830) 0:09:30.053 *********** 2025-05-06 00:57:04.121139 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121143 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121148 | orchestrator | skipping: 
[testbed-node-2] 2025-05-06 00:57:04.121153 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121157 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121162 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121167 | orchestrator | 2025-05-06 00:57:04.121172 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-06 00:57:04.121176 | orchestrator | Tuesday 06 May 2025 00:54:00 +0000 (0:00:00.659) 0:09:30.712 *********** 2025-05-06 00:57:04.121181 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.121190 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.121195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.121200 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121205 | orchestrator | 2025-05-06 00:57:04.121210 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-06 00:57:04.121215 | orchestrator | Tuesday 06 May 2025 00:54:00 +0000 (0:00:00.409) 0:09:31.121 *********** 2025-05-06 00:57:04.121220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.121224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.121229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.121234 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121239 | orchestrator | 2025-05-06 00:57:04.121244 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-06 00:57:04.121248 | orchestrator | Tuesday 06 May 2025 00:54:01 +0000 (0:00:00.529) 0:09:31.651 *********** 2025-05-06 00:57:04.121253 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.121258 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-4)  2025-05-06 00:57:04.121263 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.121268 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121272 | orchestrator | 2025-05-06 00:57:04.121277 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-06 00:57:04.121282 | orchestrator | Tuesday 06 May 2025 00:54:02 +0000 (0:00:00.607) 0:09:32.258 *********** 2025-05-06 00:57:04.121287 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121292 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121296 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121304 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121309 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121314 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121318 | orchestrator | 2025-05-06 00:57:04.121323 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-06 00:57:04.121328 | orchestrator | Tuesday 06 May 2025 00:54:02 +0000 (0:00:00.505) 0:09:32.764 *********** 2025-05-06 00:57:04.121333 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-06 00:57:04.121338 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-06 00:57:04.121343 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121348 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-06 00:57:04.121353 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121358 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121362 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-06 00:57:04.121367 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121372 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-06 00:57:04.121377 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121382 | 
orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-06 00:57:04.121387 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121392 | orchestrator | 2025-05-06 00:57:04.121396 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-06 00:57:04.121401 | orchestrator | Tuesday 06 May 2025 00:54:03 +0000 (0:00:00.993) 0:09:33.757 *********** 2025-05-06 00:57:04.121406 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121411 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121416 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121421 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121425 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121430 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121435 | orchestrator | 2025-05-06 00:57:04.121440 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-06 00:57:04.121445 | orchestrator | Tuesday 06 May 2025 00:54:04 +0000 (0:00:00.572) 0:09:34.330 *********** 2025-05-06 00:57:04.121452 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121457 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121462 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121467 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121471 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121476 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121481 | orchestrator | 2025-05-06 00:57:04.121486 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-06 00:57:04.121491 | orchestrator | Tuesday 06 May 2025 00:54:04 +0000 (0:00:00.723) 0:09:35.053 *********** 2025-05-06 00:57:04.121496 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-06 00:57:04.121501 | orchestrator | skipping: [testbed-node-0] 
2025-05-06 00:57:04.121505 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-06 00:57:04.121510 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121515 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-06 00:57:04.121520 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-06 00:57:04.121525 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121530 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121534 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-06 00:57:04.121549 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121554 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-06 00:57:04.121559 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121564 | orchestrator | 2025-05-06 00:57:04.121569 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-06 00:57:04.121573 | orchestrator | Tuesday 06 May 2025 00:54:05 +0000 (0:00:00.596) 0:09:35.650 *********** 2025-05-06 00:57:04.121578 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121583 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121588 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121595 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.121600 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121605 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.121610 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121615 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.121620 | orchestrator | skipping: 
[testbed-node-5] 2025-05-06 00:57:04.121624 | orchestrator | 2025-05-06 00:57:04.121629 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-06 00:57:04.121634 | orchestrator | Tuesday 06 May 2025 00:54:06 +0000 (0:00:00.744) 0:09:36.395 *********** 2025-05-06 00:57:04.121639 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-06 00:57:04.121644 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-06 00:57:04.121661 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-06 00:57:04.121666 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-06 00:57:04.121671 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-06 00:57:04.121676 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-06 00:57:04.121681 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121686 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-06 00:57:04.121690 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121695 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-06 00:57:04.121700 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-06 00:57:04.121705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.121710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.121718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.121723 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121727 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-06 00:57:04.121732 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-06 00:57:04.121737 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-06 00:57:04.121742 | 
orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121746 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121751 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-06 00:57:04.121758 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-06 00:57:04.121763 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-06 00:57:04.121768 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121773 | orchestrator | 2025-05-06 00:57:04.121778 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-06 00:57:04.121783 | orchestrator | Tuesday 06 May 2025 00:54:07 +0000 (0:00:01.241) 0:09:37.636 *********** 2025-05-06 00:57:04.121788 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121792 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121797 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121802 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121807 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121812 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121816 | orchestrator | 2025-05-06 00:57:04.121821 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-06 00:57:04.121826 | orchestrator | Tuesday 06 May 2025 00:54:08 +0000 (0:00:01.166) 0:09:38.803 *********** 2025-05-06 00:57:04.121831 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121836 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121840 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121845 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-06 00:57:04.121850 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121855 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-06 00:57:04.121860 | orchestrator | skipping: 
[testbed-node-4] 2025-05-06 00:57:04.121864 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-06 00:57:04.121869 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121874 | orchestrator | 2025-05-06 00:57:04.121879 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-06 00:57:04.121884 | orchestrator | Tuesday 06 May 2025 00:54:09 +0000 (0:00:01.286) 0:09:40.089 *********** 2025-05-06 00:57:04.121889 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121925 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121930 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121935 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121940 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121945 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121950 | orchestrator | 2025-05-06 00:57:04.121955 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-06 00:57:04.121959 | orchestrator | Tuesday 06 May 2025 00:54:10 +0000 (0:00:01.054) 0:09:41.143 *********** 2025-05-06 00:57:04.121964 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:57:04.121969 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:04.121977 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:04.121982 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.121986 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.121991 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.121996 | orchestrator | 2025-05-06 00:57:04.122001 | orchestrator | TASK [ceph-crash : create client.crash keyring] ******************************** 2025-05-06 00:57:04.122009 | orchestrator | Tuesday 06 May 2025 00:54:12 +0000 (0:00:01.153) 0:09:42.297 *********** 2025-05-06 00:57:04.122051 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.122057 | orchestrator | 
2025-05-06 00:57:04.122062 | orchestrator | TASK [ceph-crash : get keys from monitors] ************************************* 2025-05-06 00:57:04.122067 | orchestrator | Tuesday 06 May 2025 00:54:15 +0000 (0:00:03.313) 0:09:45.610 *********** 2025-05-06 00:57:04.122072 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.122077 | orchestrator | 2025-05-06 00:57:04.122082 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-05-06 00:57:04.122086 | orchestrator | Tuesday 06 May 2025 00:54:17 +0000 (0:00:01.627) 0:09:47.238 *********** 2025-05-06 00:57:04.122091 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:57:04.122096 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.122101 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.122106 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.122110 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.122115 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.122120 | orchestrator | 2025-05-06 00:57:04.122125 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-05-06 00:57:04.122130 | orchestrator | Tuesday 06 May 2025 00:54:18 +0000 (0:00:01.554) 0:09:48.792 *********** 2025-05-06 00:57:04.122135 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:04.122140 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:04.122144 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:04.122149 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.122154 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.122159 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.122163 | orchestrator | 2025-05-06 00:57:04.122168 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-05-06 00:57:04.122173 | orchestrator | Tuesday 06 May 2025 00:54:19 +0000 (0:00:01.077) 0:09:49.870 *********** 
2025-05-06 00:57:04.122178 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.122185 | orchestrator |
2025-05-06 00:57:04.122190 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ********
2025-05-06 00:57:04.122194 | orchestrator | Tuesday 06 May 2025 00:54:21 +0000 (0:00:01.679) 0:09:51.550 ***********
2025-05-06 00:57:04.122199 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.122204 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.122209 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.122213 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.122218 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.122223 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.122227 | orchestrator |
2025-05-06 00:57:04.122232 | orchestrator | TASK [ceph-crash : start the ceph-crash service] *******************************
2025-05-06 00:57:04.122237 | orchestrator | Tuesday 06 May 2025 00:54:23 +0000 (0:00:02.113) 0:09:53.663 ***********
2025-05-06 00:57:04.122242 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.122247 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.122251 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.122256 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.122261 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.122265 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.122270 | orchestrator |
2025-05-06 00:57:04.122275 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] ****************************
2025-05-06 00:57:04.122280 | orchestrator | Tuesday 06 May 2025 00:54:27 +0000 (0:00:03.935) 0:09:57.599 ***********
2025-05-06 00:57:04.122285 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.122290 | orchestrator |
2025-05-06 00:57:04.122295 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ******
2025-05-06 00:57:04.122299 | orchestrator | Tuesday 06 May 2025 00:54:28 +0000 (0:00:01.264) 0:09:58.864 ***********
2025-05-06 00:57:04.122307 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.122312 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.122317 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.122322 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.122327 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.122331 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.122336 | orchestrator |
2025-05-06 00:57:04.122341 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] ****************
2025-05-06 00:57:04.122346 | orchestrator | Tuesday 06 May 2025 00:54:29 +0000 (0:00:00.642) 0:09:59.506 ***********
2025-05-06 00:57:04.122351 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:04.122356 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:04.122360 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.122365 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:04.122370 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.122375 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.122379 | orchestrator |
2025-05-06 00:57:04.122384 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] *******
2025-05-06 00:57:04.122389 | orchestrator | Tuesday 06 May 2025 00:54:32 +0000 (0:00:02.996) 0:10:02.502 ***********
2025-05-06 00:57:04.122394 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:04.122402 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:04.122407 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:04.122412 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.122416 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.122421 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.122426 | orchestrator |
2025-05-06 00:57:04.122431 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-05-06 00:57:04.122436 | orchestrator |
2025-05-06 00:57:04.122441 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-06 00:57:04.122446 | orchestrator | Tuesday 06 May 2025 00:54:34 +0000 (0:00:02.445) 0:10:04.948 ***********
2025-05-06 00:57:04.122455 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.122463 | orchestrator |
2025-05-06 00:57:04.122468 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-06 00:57:04.122473 | orchestrator | Tuesday 06 May 2025 00:54:35 +0000 (0:00:00.734) 0:10:05.683 ***********
2025-05-06 00:57:04.122478 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.122483 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.122487 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.122492 | orchestrator |
2025-05-06 00:57:04.122497 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-06 00:57:04.122502 | orchestrator | Tuesday 06 May 2025 00:54:35 +0000 (0:00:00.335) 0:10:06.018 ***********
2025-05-06 00:57:04.122507 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.122512 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.122516 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.122521 | orchestrator |
2025-05-06 00:57:04.122526 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-06 00:57:04.122531 | orchestrator | Tuesday 06 May 2025 00:54:36 +0000 (0:00:00.685) 0:10:06.704 ***********
2025-05-06 00:57:04.122536 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.122540 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.122545 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.122550 | orchestrator |
2025-05-06 00:57:04.122555 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-06 00:57:04.122563 | orchestrator | Tuesday 06 May 2025 00:54:37 +0000 (0:00:01.010) 0:10:07.363 ***********
2025-05-06 00:57:04.122568 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.122573 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.122578 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.122583 | orchestrator |
2025-05-06 00:57:04.122588 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-06 00:57:04.122592 | orchestrator | Tuesday 06 May 2025 00:54:38 +0000 (0:00:01.010) 0:10:08.373 ***********
2025-05-06 00:57:04.122601 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.122605 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.122610 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.122615 | orchestrator |
2025-05-06 00:57:04.122620 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-06 00:57:04.122625 | orchestrator | Tuesday 06 May 2025 00:54:38 +0000 (0:00:00.290) 0:10:08.664 ***********
2025-05-06 00:57:04.122630 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.122635 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.122639 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.122644 | orchestrator |
2025-05-06 00:57:04.122661 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-06 00:57:04.122666 | orchestrator | Tuesday 06 May 2025 00:54:38 +0000 (0:00:00.310) 0:10:08.974 ***********
2025-05-06 00:57:04.122671 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.122676 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.122680 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.122685 | orchestrator |
2025-05-06 00:57:04.122690 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-06 00:57:04.122695 | orchestrator | Tuesday 06 May 2025 00:54:39 +0000 (0:00:00.547) 0:10:09.522 ***********
2025-05-06 00:57:04.122699 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.122704 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.122709 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.122714 | orchestrator |
2025-05-06 00:57:04.122718 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-06 00:57:04.122723 | orchestrator | Tuesday 06 May 2025 00:54:39 +0000 (0:00:00.367) 0:10:09.890 ***********
2025-05-06 00:57:04.122728 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.122733 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.122738 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.122742 | orchestrator |
2025-05-06 00:57:04.122747 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-06 00:57:04.122752 | orchestrator | Tuesday 06 May 2025 00:54:39 +0000 (0:00:00.324) 0:10:10.214 ***********
2025-05-06 00:57:04.122757 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.122761 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.122766 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.122771 | orchestrator |
2025-05-06 00:57:04.122776 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-06 00:57:04.122780 | orchestrator | Tuesday 06 May 2025 00:54:40 +0000 (0:00:00.340) 0:10:10.555 ***********
2025-05-06 00:57:04.122785 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.122790 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.122795 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.122799 | orchestrator |
2025-05-06 00:57:04.122804 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-06 00:57:04.122809 | orchestrator | Tuesday 06 May 2025 00:54:41 +0000 (0:00:01.008) 0:10:11.564 ***********
2025-05-06 00:57:04.122814 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.122819 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.122823 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.122828 | orchestrator |
2025-05-06 00:57:04.122833 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-06 00:57:04.122838 | orchestrator | Tuesday 06 May 2025 00:54:41 +0000 (0:00:00.339) 0:10:11.904 ***********
2025-05-06 00:57:04.122842 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.122847 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.122852 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.122856 | orchestrator |
2025-05-06 00:57:04.122861 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-06 00:57:04.122866 | orchestrator | Tuesday 06 May 2025 00:54:42 +0000 (0:00:00.332) 0:10:12.236 ***********
2025-05-06 00:57:04.122874 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.122879 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.122884 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.122889 | orchestrator |
2025-05-06 00:57:04.122894 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-06 00:57:04.122898 | orchestrator | Tuesday 06 May 2025 00:54:42 +0000 (0:00:00.340) 0:10:12.577 ***********
2025-05-06 00:57:04.122903 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.122911 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.122915 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.122920 | orchestrator |
2025-05-06 00:57:04.122925 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-06 00:57:04.122930 | orchestrator | Tuesday 06 May 2025 00:54:42 +0000 (0:00:00.544) 0:10:13.122 ***********
2025-05-06 00:57:04.122935 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.122942 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.122947 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.122952 | orchestrator |
2025-05-06 00:57:04.122957 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-06 00:57:04.122961 | orchestrator | Tuesday 06 May 2025 00:54:43 +0000 (0:00:00.313) 0:10:13.435 ***********
2025-05-06 00:57:04.122966 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.122971 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.122976 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.122980 | orchestrator |
2025-05-06 00:57:04.122985 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-06 00:57:04.122990 | orchestrator | Tuesday 06 May 2025 00:54:43 +0000 (0:00:00.296) 0:10:13.731 ***********
2025-05-06 00:57:04.122995 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.123000 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.123004 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.123009 | orchestrator |
2025-05-06 00:57:04.123014 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-06 00:57:04.123018 | orchestrator | Tuesday 06 May 2025 00:54:43 +0000 (0:00:00.295) 0:10:14.027 ***********
2025-05-06 00:57:04.123023 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123028 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123033 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123037 | orchestrator | 2025-05-06 00:57:04.123042 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-06 00:57:04.123049 | orchestrator | Tuesday 06 May 2025 00:54:44 +0000 (0:00:00.606) 0:10:14.633 *********** 2025-05-06 00:57:04.123054 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.123059 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.123064 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.123071 | orchestrator | 2025-05-06 00:57:04.123080 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-06 00:57:04.123089 | orchestrator | Tuesday 06 May 2025 00:54:44 +0000 (0:00:00.363) 0:10:14.996 *********** 2025-05-06 00:57:04.123098 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123106 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123115 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123124 | orchestrator | 2025-05-06 00:57:04.123132 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-06 00:57:04.123142 | orchestrator | Tuesday 06 May 2025 00:54:45 +0000 (0:00:00.360) 0:10:15.357 *********** 2025-05-06 00:57:04.123147 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123152 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123158 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123166 | orchestrator | 2025-05-06 00:57:04.123174 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-06 00:57:04.123182 | orchestrator | Tuesday 06 May 2025 00:54:45 +0000 (0:00:00.398) 0:10:15.756 *********** 2025-05-06 
00:57:04.123189 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123201 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123208 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123215 | orchestrator | 2025-05-06 00:57:04.123221 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-06 00:57:04.123228 | orchestrator | Tuesday 06 May 2025 00:54:46 +0000 (0:00:00.554) 0:10:16.310 *********** 2025-05-06 00:57:04.123235 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123242 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123249 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123257 | orchestrator | 2025-05-06 00:57:04.123264 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-06 00:57:04.123271 | orchestrator | Tuesday 06 May 2025 00:54:46 +0000 (0:00:00.333) 0:10:16.644 *********** 2025-05-06 00:57:04.123278 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123285 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123292 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123299 | orchestrator | 2025-05-06 00:57:04.123307 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-06 00:57:04.123314 | orchestrator | Tuesday 06 May 2025 00:54:46 +0000 (0:00:00.343) 0:10:16.988 *********** 2025-05-06 00:57:04.123320 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123329 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123336 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123344 | orchestrator | 2025-05-06 00:57:04.123352 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-06 00:57:04.123360 | orchestrator | Tuesday 06 May 2025 00:54:47 +0000 (0:00:00.337) 0:10:17.325 *********** 2025-05-06 
00:57:04.123370 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123375 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123380 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123385 | orchestrator | 2025-05-06 00:57:04.123390 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-06 00:57:04.123396 | orchestrator | Tuesday 06 May 2025 00:54:47 +0000 (0:00:00.849) 0:10:18.175 *********** 2025-05-06 00:57:04.123401 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123405 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123410 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123415 | orchestrator | 2025-05-06 00:57:04.123420 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-06 00:57:04.123425 | orchestrator | Tuesday 06 May 2025 00:54:48 +0000 (0:00:00.392) 0:10:18.568 *********** 2025-05-06 00:57:04.123430 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123435 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123439 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123444 | orchestrator | 2025-05-06 00:57:04.123449 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-06 00:57:04.123458 | orchestrator | Tuesday 06 May 2025 00:54:48 +0000 (0:00:00.337) 0:10:18.905 *********** 2025-05-06 00:57:04.123463 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123468 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123473 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123477 | orchestrator | 2025-05-06 00:57:04.123482 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-06 00:57:04.123487 | 
orchestrator | Tuesday 06 May 2025 00:54:49 +0000 (0:00:00.343) 0:10:19.249 *********** 2025-05-06 00:57:04.123492 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123497 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123502 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123507 | orchestrator | 2025-05-06 00:57:04.123511 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-06 00:57:04.123516 | orchestrator | Tuesday 06 May 2025 00:54:49 +0000 (0:00:00.641) 0:10:19.891 *********** 2025-05-06 00:57:04.123521 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123530 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123535 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123539 | orchestrator | 2025-05-06 00:57:04.123544 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-06 00:57:04.123549 | orchestrator | Tuesday 06 May 2025 00:54:50 +0000 (0:00:00.359) 0:10:20.251 *********** 2025-05-06 00:57:04.123554 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.123559 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.123564 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.123568 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.123573 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123581 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123586 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.123593 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.123598 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123603 | orchestrator | 2025-05-06 00:57:04.123608 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-06 00:57:04.123613 
| orchestrator | Tuesday 06 May 2025 00:54:50 +0000 (0:00:00.464) 0:10:20.715 *********** 2025-05-06 00:57:04.123617 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-06 00:57:04.123622 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-06 00:57:04.123627 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-06 00:57:04.123632 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123637 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-06 00:57:04.123642 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123658 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-06 00:57:04.123663 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-06 00:57:04.123668 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123673 | orchestrator | 2025-05-06 00:57:04.123678 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-06 00:57:04.123683 | orchestrator | Tuesday 06 May 2025 00:54:50 +0000 (0:00:00.336) 0:10:21.052 *********** 2025-05-06 00:57:04.123688 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123692 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123697 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123702 | orchestrator | 2025-05-06 00:57:04.123707 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-06 00:57:04.123711 | orchestrator | Tuesday 06 May 2025 00:54:51 +0000 (0:00:00.644) 0:10:21.697 *********** 2025-05-06 00:57:04.123716 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123721 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123726 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123730 | orchestrator | 2025-05-06 00:57:04.123735 | orchestrator | TASK 
[ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-06 00:57:04.123741 | orchestrator | Tuesday 06 May 2025 00:54:51 +0000 (0:00:00.359) 0:10:22.056 *********** 2025-05-06 00:57:04.123746 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123750 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123755 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123760 | orchestrator | 2025-05-06 00:57:04.123765 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-06 00:57:04.123770 | orchestrator | Tuesday 06 May 2025 00:54:52 +0000 (0:00:00.342) 0:10:22.399 *********** 2025-05-06 00:57:04.123774 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123779 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123784 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123789 | orchestrator | 2025-05-06 00:57:04.123796 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-06 00:57:04.123808 | orchestrator | Tuesday 06 May 2025 00:54:52 +0000 (0:00:00.316) 0:10:22.716 *********** 2025-05-06 00:57:04.123813 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123817 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123822 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123827 | orchestrator | 2025-05-06 00:57:04.123832 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-06 00:57:04.123837 | orchestrator | Tuesday 06 May 2025 00:54:53 +0000 (0:00:00.585) 0:10:23.302 *********** 2025-05-06 00:57:04.123841 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123846 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123851 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123856 | 
orchestrator | 2025-05-06 00:57:04.123860 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-06 00:57:04.123865 | orchestrator | Tuesday 06 May 2025 00:54:53 +0000 (0:00:00.410) 0:10:23.712 *********** 2025-05-06 00:57:04.123870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.123875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.123880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.123884 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123889 | orchestrator | 2025-05-06 00:57:04.123897 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-06 00:57:04.123902 | orchestrator | Tuesday 06 May 2025 00:54:54 +0000 (0:00:00.563) 0:10:24.276 *********** 2025-05-06 00:57:04.123906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.123911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.123916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.123921 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123925 | orchestrator | 2025-05-06 00:57:04.123930 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-06 00:57:04.123935 | orchestrator | Tuesday 06 May 2025 00:54:54 +0000 (0:00:00.528) 0:10:24.805 *********** 2025-05-06 00:57:04.123940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.123945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.123950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.123954 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123959 | orchestrator | 2025-05-06 00:57:04.123964 | 
orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-06 00:57:04.123969 | orchestrator | Tuesday 06 May 2025 00:54:54 +0000 (0:00:00.401) 0:10:25.206 *********** 2025-05-06 00:57:04.123974 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.123978 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.123983 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.123988 | orchestrator | 2025-05-06 00:57:04.123993 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-06 00:57:04.123998 | orchestrator | Tuesday 06 May 2025 00:54:55 +0000 (0:00:00.292) 0:10:25.499 *********** 2025-05-06 00:57:04.124002 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-06 00:57:04.124007 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124012 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-06 00:57:04.124017 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124021 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-06 00:57:04.124026 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.124031 | orchestrator | 2025-05-06 00:57:04.124036 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-06 00:57:04.124041 | orchestrator | Tuesday 06 May 2025 00:54:55 +0000 (0:00:00.546) 0:10:26.046 *********** 2025-05-06 00:57:04.124045 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124050 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124058 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.124063 | orchestrator | 2025-05-06 00:57:04.124068 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-06 00:57:04.124073 | orchestrator | Tuesday 06 May 2025 00:54:56 +0000 (0:00:00.271) 0:10:26.317 *********** 2025-05-06 00:57:04.124078 | 
orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124083 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124087 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.124092 | orchestrator | 2025-05-06 00:57:04.124097 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-06 00:57:04.124102 | orchestrator | Tuesday 06 May 2025 00:54:56 +0000 (0:00:00.279) 0:10:26.597 *********** 2025-05-06 00:57:04.124107 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-06 00:57:04.124112 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124116 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-06 00:57:04.124121 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124126 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-06 00:57:04.124131 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.124135 | orchestrator | 2025-05-06 00:57:04.124140 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-06 00:57:04.124145 | orchestrator | Tuesday 06 May 2025 00:54:56 +0000 (0:00:00.368) 0:10:26.965 *********** 2025-05-06 00:57:04.124150 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.124155 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124160 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.124165 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124170 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-06 00:57:04.124175 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.124179 | 
orchestrator | 2025-05-06 00:57:04.124184 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-06 00:57:04.124189 | orchestrator | Tuesday 06 May 2025 00:54:57 +0000 (0:00:00.442) 0:10:27.408 *********** 2025-05-06 00:57:04.124194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.124199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.124203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.124208 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124213 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-06 00:57:04.124218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-06 00:57:04.124222 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-06 00:57:04.124227 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124232 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-06 00:57:04.124237 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-06 00:57:04.124242 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-06 00:57:04.124246 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.124251 | orchestrator | 2025-05-06 00:57:04.124258 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-06 00:57:04.124263 | orchestrator | Tuesday 06 May 2025 00:54:57 +0000 (0:00:00.516) 0:10:27.924 *********** 2025-05-06 00:57:04.124268 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124273 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124278 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.124282 | orchestrator | 2025-05-06 00:57:04.124287 | orchestrator | TASK [ceph-rgw : create rgw keyrings] 
****************************************** 2025-05-06 00:57:04.124292 | orchestrator | Tuesday 06 May 2025 00:54:58 +0000 (0:00:00.597) 0:10:28.522 *********** 2025-05-06 00:57:04.124301 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-06 00:57:04.124305 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124310 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-06 00:57:04.124315 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124320 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-06 00:57:04.124325 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.124330 | orchestrator | 2025-05-06 00:57:04.124335 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-06 00:57:04.124340 | orchestrator | Tuesday 06 May 2025 00:54:58 +0000 (0:00:00.506) 0:10:29.028 *********** 2025-05-06 00:57:04.124345 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124350 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124355 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.124360 | orchestrator | 2025-05-06 00:57:04.124364 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-06 00:57:04.124369 | orchestrator | Tuesday 06 May 2025 00:54:59 +0000 (0:00:00.853) 0:10:29.881 *********** 2025-05-06 00:57:04.124374 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124379 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124384 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.124388 | orchestrator | 2025-05-06 00:57:04.124396 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-06 00:57:04.124401 | orchestrator | Tuesday 06 May 2025 00:55:00 +0000 (0:00:00.516) 0:10:30.398 *********** 2025-05-06 00:57:04.124405 | orchestrator | skipping: [testbed-node-4] 2025-05-06 
00:57:04.124410 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.124415 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-05-06 00:57:04.124420 | orchestrator |
2025-05-06 00:57:04.124425 | orchestrator | TASK [ceph-facts : get current default crush rule details] *********************
2025-05-06 00:57:04.124429 | orchestrator | Tuesday 06 May 2025 00:55:00 +0000 (0:00:00.377) 0:10:30.776 ***********
2025-05-06 00:57:04.124434 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-06 00:57:04.124439 | orchestrator |
2025-05-06 00:57:04.124444 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************
2025-05-06 00:57:04.124449 | orchestrator | Tuesday 06 May 2025 00:55:02 +0000 (0:00:01.991) 0:10:32.767 ***********
2025-05-06 00:57:04.124455 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-05-06 00:57:04.124462 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.124467 | orchestrator |
2025-05-06 00:57:04.124471 | orchestrator | TASK [ceph-mds : create filesystem pools] **************************************
2025-05-06 00:57:04.124476 | orchestrator | Tuesday 06 May 2025 00:55:02 +0000 (0:00:00.366) 0:10:33.134 ***********
2025-05-06 00:57:04.124482 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-06 00:57:04.124488 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-06 00:57:04.124493 | orchestrator |
2025-05-06 00:57:04.124498 | orchestrator | TASK [ceph-mds : create ceph filesystem] ***************************************
2025-05-06 00:57:04.124503 | orchestrator | Tuesday 06 May 2025 00:55:10 +0000 (0:00:07.358) 0:10:40.492 ***********
2025-05-06 00:57:04.124507 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-06 00:57:04.124515 | orchestrator |
2025-05-06 00:57:04.124520 | orchestrator | TASK [ceph-mds : include common.yml] *******************************************
2025-05-06 00:57:04.124525 | orchestrator | Tuesday 06 May 2025 00:55:13 +0000 (0:00:02.886) 0:10:43.379 ***********
2025-05-06 00:57:04.124529 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.124534 | orchestrator |
2025-05-06 00:57:04.124539 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] *********************
2025-05-06 00:57:04.124544 | orchestrator | Tuesday 06 May 2025 00:55:13 +0000 (0:00:00.748) 0:10:44.127 ***********
2025-05-06 00:57:04.124549 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-05-06 00:57:04.124553 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-05-06 00:57:04.124558 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-05-06 00:57:04.124563 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-05-06 00:57:04.124568 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-05-06 00:57:04.124573 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-05-06 00:57:04.124577 | orchestrator |
2025-05-06 00:57:04.124585 | orchestrator | TASK
[ceph-mds : get keys from monitors] *************************************** 2025-05-06 00:57:04.124590 | orchestrator | Tuesday 06 May 2025 00:55:14 +0000 (0:00:01.092) 0:10:45.220 *********** 2025-05-06 00:57:04.124595 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-06 00:57:04.124600 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-06 00:57:04.124604 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-06 00:57:04.124609 | orchestrator | 2025-05-06 00:57:04.124614 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-06 00:57:04.124619 | orchestrator | Tuesday 06 May 2025 00:55:16 +0000 (0:00:01.814) 0:10:47.035 *********** 2025-05-06 00:57:04.124623 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-06 00:57:04.124628 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-06 00:57:04.124633 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-06 00:57:04.124638 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.124643 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-06 00:57:04.124673 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.124678 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-06 00:57:04.124683 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-06 00:57:04.124688 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.124693 | orchestrator | 2025-05-06 00:57:04.124698 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-06 00:57:04.124703 | orchestrator | Tuesday 06 May 2025 00:55:17 +0000 (0:00:01.103) 0:10:48.139 *********** 2025-05-06 00:57:04.124707 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.124712 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.124717 | orchestrator | skipping: 
[testbed-node-5]
2025-05-06 00:57:04.124722 | orchestrator |
2025-05-06 00:57:04.124727 | orchestrator | TASK [ceph-mds : containerized.yml] ********************************************
2025-05-06 00:57:04.124731 | orchestrator | Tuesday 06 May 2025 00:55:18 +0000 (0:00:00.605) 0:10:48.745 ***********
2025-05-06 00:57:04.124736 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.124741 | orchestrator |
2025-05-06 00:57:04.124746 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************
2025-05-06 00:57:04.124751 | orchestrator | Tuesday 06 May 2025 00:55:19 +0000 (0:00:00.614) 0:10:49.359 ***********
2025-05-06 00:57:04.124756 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.124760 | orchestrator |
2025-05-06 00:57:04.124765 | orchestrator | TASK [ceph-mds : generate systemd unit file] ***********************************
2025-05-06 00:57:04.124777 | orchestrator | Tuesday 06 May 2025 00:55:19 +0000 (0:00:00.841) 0:10:50.201 ***********
2025-05-06 00:57:04.124782 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.124787 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.124792 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.124796 | orchestrator |
2025-05-06 00:57:04.124801 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************
2025-05-06 00:57:04.124809 | orchestrator | Tuesday 06 May 2025 00:55:21 +0000 (0:00:01.198) 0:10:51.400 ***********
2025-05-06 00:57:04.124814 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.124819 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.124823 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.124828 | orchestrator |
2025-05-06 00:57:04.124833 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] ***************************************
2025-05-06 00:57:04.124838 | orchestrator | Tuesday 06 May 2025 00:55:22 +0000 (0:00:01.134) 0:10:52.534 ***********
2025-05-06 00:57:04.124843 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.124847 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.124852 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.124857 | orchestrator |
2025-05-06 00:57:04.124862 | orchestrator | TASK [ceph-mds : systemd start mds container] **********************************
2025-05-06 00:57:04.124866 | orchestrator | Tuesday 06 May 2025 00:55:24 +0000 (0:00:01.711) 0:10:54.246 ***********
2025-05-06 00:57:04.124871 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.124876 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.124881 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.124886 | orchestrator |
2025-05-06 00:57:04.124890 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] *********************************
2025-05-06 00:57:04.124895 | orchestrator | Tuesday 06 May 2025 00:55:25 +0000 (0:00:01.839) 0:10:56.085 ***********
2025-05-06 00:57:04.124900 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left).
2025-05-06 00:57:04.124905 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left).
2025-05-06 00:57:04.124910 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left).
2025-05-06 00:57:04.124914 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.124919 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.124924 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.124929 | orchestrator | 2025-05-06 00:57:04.124934 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-06 00:57:04.124939 | orchestrator | Tuesday 06 May 2025 00:55:42 +0000 (0:00:17.009) 0:11:13.094 *********** 2025-05-06 00:57:04.124943 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.124948 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.124953 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.124958 | orchestrator | 2025-05-06 00:57:04.124962 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-06 00:57:04.124967 | orchestrator | Tuesday 06 May 2025 00:55:43 +0000 (0:00:00.664) 0:11:13.759 *********** 2025-05-06 00:57:04.124972 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.124977 | orchestrator | 2025-05-06 00:57:04.124982 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-06 00:57:04.124989 | orchestrator | Tuesday 06 May 2025 00:55:44 +0000 (0:00:00.970) 0:11:14.729 *********** 2025-05-06 00:57:04.124994 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.124999 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.125004 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.125009 | orchestrator | 2025-05-06 00:57:04.125013 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-06 00:57:04.125018 | orchestrator | Tuesday 06 May 2025 00:55:44 +0000 (0:00:00.379) 0:11:15.108 *********** 2025-05-06 00:57:04.125023 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.125031 | 
orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.125036 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.125041 | orchestrator | 2025-05-06 00:57:04.125045 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-06 00:57:04.125050 | orchestrator | Tuesday 06 May 2025 00:55:46 +0000 (0:00:01.215) 0:11:16.324 *********** 2025-05-06 00:57:04.125055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-06 00:57:04.125060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-06 00:57:04.125065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-06 00:57:04.125070 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125075 | orchestrator | 2025-05-06 00:57:04.125079 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-06 00:57:04.125084 | orchestrator | Tuesday 06 May 2025 00:55:47 +0000 (0:00:01.397) 0:11:17.722 *********** 2025-05-06 00:57:04.125089 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.125094 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.125099 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.125103 | orchestrator | 2025-05-06 00:57:04.125108 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-06 00:57:04.125113 | orchestrator | Tuesday 06 May 2025 00:55:47 +0000 (0:00:00.347) 0:11:18.069 *********** 2025-05-06 00:57:04.125118 | orchestrator | changed: [testbed-node-3] 2025-05-06 00:57:04.125123 | orchestrator | changed: [testbed-node-4] 2025-05-06 00:57:04.125128 | orchestrator | changed: [testbed-node-5] 2025-05-06 00:57:04.125132 | orchestrator | 2025-05-06 00:57:04.125137 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-06 00:57:04.125142 | orchestrator | 2025-05-06 00:57:04.125147 | orchestrator | TASK 
[ceph-handler : include check_running_containers.yml] ********************* 2025-05-06 00:57:04.125152 | orchestrator | Tuesday 06 May 2025 00:55:49 +0000 (0:00:01.979) 0:11:20.048 *********** 2025-05-06 00:57:04.125156 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:57:04.125164 | orchestrator | 2025-05-06 00:57:04.125169 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-06 00:57:04.125174 | orchestrator | Tuesday 06 May 2025 00:55:50 +0000 (0:00:00.879) 0:11:20.928 *********** 2025-05-06 00:57:04.125178 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125183 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125188 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125195 | orchestrator | 2025-05-06 00:57:04.125200 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-06 00:57:04.125205 | orchestrator | Tuesday 06 May 2025 00:55:51 +0000 (0:00:00.359) 0:11:21.288 *********** 2025-05-06 00:57:04.125210 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.125215 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.125220 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.125224 | orchestrator | 2025-05-06 00:57:04.125229 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-06 00:57:04.125236 | orchestrator | Tuesday 06 May 2025 00:55:51 +0000 (0:00:00.727) 0:11:22.016 *********** 2025-05-06 00:57:04.125241 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.125246 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.125251 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.125256 | orchestrator | 2025-05-06 00:57:04.125261 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
2025-05-06 00:57:04.125266 | orchestrator | Tuesday 06 May 2025 00:55:52 +0000 (0:00:00.962) 0:11:22.979 *********** 2025-05-06 00:57:04.125270 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.125275 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.125280 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.125285 | orchestrator | 2025-05-06 00:57:04.125290 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-06 00:57:04.125294 | orchestrator | Tuesday 06 May 2025 00:55:53 +0000 (0:00:00.720) 0:11:23.699 *********** 2025-05-06 00:57:04.125302 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125307 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125312 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125317 | orchestrator | 2025-05-06 00:57:04.125322 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-06 00:57:04.125327 | orchestrator | Tuesday 06 May 2025 00:55:53 +0000 (0:00:00.349) 0:11:24.048 *********** 2025-05-06 00:57:04.125332 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125336 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125341 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125346 | orchestrator | 2025-05-06 00:57:04.125351 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-06 00:57:04.125356 | orchestrator | Tuesday 06 May 2025 00:55:54 +0000 (0:00:00.322) 0:11:24.371 *********** 2025-05-06 00:57:04.125361 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125366 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125370 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125375 | orchestrator | 2025-05-06 00:57:04.125380 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-06 
00:57:04.125385 | orchestrator | Tuesday 06 May 2025 00:55:54 +0000 (0:00:00.708) 0:11:25.079 *********** 2025-05-06 00:57:04.125390 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125395 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125400 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125404 | orchestrator | 2025-05-06 00:57:04.125409 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-06 00:57:04.125414 | orchestrator | Tuesday 06 May 2025 00:55:55 +0000 (0:00:00.342) 0:11:25.421 *********** 2025-05-06 00:57:04.125419 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125427 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125432 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125436 | orchestrator | 2025-05-06 00:57:04.125441 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-06 00:57:04.125446 | orchestrator | Tuesday 06 May 2025 00:55:55 +0000 (0:00:00.365) 0:11:25.787 *********** 2025-05-06 00:57:04.125451 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125456 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125461 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125466 | orchestrator | 2025-05-06 00:57:04.125470 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-06 00:57:04.125475 | orchestrator | Tuesday 06 May 2025 00:55:55 +0000 (0:00:00.351) 0:11:26.138 *********** 2025-05-06 00:57:04.125480 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.125485 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.125490 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.125495 | orchestrator | 2025-05-06 00:57:04.125499 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-06 00:57:04.125504 | 
orchestrator | Tuesday 06 May 2025 00:55:56 +0000 (0:00:00.955) 0:11:27.094 *********** 2025-05-06 00:57:04.125509 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125514 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125519 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125523 | orchestrator | 2025-05-06 00:57:04.125528 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-06 00:57:04.125533 | orchestrator | Tuesday 06 May 2025 00:55:57 +0000 (0:00:00.303) 0:11:27.398 *********** 2025-05-06 00:57:04.125538 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125543 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125548 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125553 | orchestrator | 2025-05-06 00:57:04.125557 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-06 00:57:04.125562 | orchestrator | Tuesday 06 May 2025 00:55:57 +0000 (0:00:00.301) 0:11:27.699 *********** 2025-05-06 00:57:04.125570 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.125575 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.125580 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.125584 | orchestrator | 2025-05-06 00:57:04.125589 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-06 00:57:04.125594 | orchestrator | Tuesday 06 May 2025 00:55:57 +0000 (0:00:00.338) 0:11:28.037 *********** 2025-05-06 00:57:04.125599 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.125604 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.125608 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.125613 | orchestrator | 2025-05-06 00:57:04.125618 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-06 00:57:04.125623 | orchestrator | Tuesday 06 May 2025 
00:55:58 +0000 (0:00:00.594) 0:11:28.632 *********** 2025-05-06 00:57:04.125628 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.125632 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.125637 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.125642 | orchestrator | 2025-05-06 00:57:04.125660 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-06 00:57:04.125665 | orchestrator | Tuesday 06 May 2025 00:55:58 +0000 (0:00:00.322) 0:11:28.955 *********** 2025-05-06 00:57:04.125670 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125675 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125680 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125685 | orchestrator | 2025-05-06 00:57:04.125690 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-06 00:57:04.125695 | orchestrator | Tuesday 06 May 2025 00:55:59 +0000 (0:00:00.308) 0:11:29.263 *********** 2025-05-06 00:57:04.125699 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125704 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125709 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125717 | orchestrator | 2025-05-06 00:57:04.125722 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-06 00:57:04.125729 | orchestrator | Tuesday 06 May 2025 00:55:59 +0000 (0:00:00.286) 0:11:29.549 *********** 2025-05-06 00:57:04.125734 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125738 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125743 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125748 | orchestrator | 2025-05-06 00:57:04.125753 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-06 00:57:04.125758 | orchestrator | Tuesday 06 May 2025 00:55:59 +0000 
(0:00:00.541) 0:11:30.091 *********** 2025-05-06 00:57:04.125763 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:57:04.125767 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:57:04.125772 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:57:04.125777 | orchestrator | 2025-05-06 00:57:04.125782 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-06 00:57:04.125787 | orchestrator | Tuesday 06 May 2025 00:56:00 +0000 (0:00:00.350) 0:11:30.441 *********** 2025-05-06 00:57:04.125791 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125796 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125801 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125806 | orchestrator | 2025-05-06 00:57:04.125811 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-06 00:57:04.125816 | orchestrator | Tuesday 06 May 2025 00:56:00 +0000 (0:00:00.350) 0:11:30.792 *********** 2025-05-06 00:57:04.125821 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125825 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125830 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125835 | orchestrator | 2025-05-06 00:57:04.125840 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-06 00:57:04.125845 | orchestrator | Tuesday 06 May 2025 00:56:00 +0000 (0:00:00.323) 0:11:31.116 *********** 2025-05-06 00:57:04.125850 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125858 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125863 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125868 | orchestrator | 2025-05-06 00:57:04.125872 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-06 00:57:04.125877 | orchestrator | Tuesday 06 May 2025 00:56:01 +0000 (0:00:00.715) 
0:11:31.832 *********** 2025-05-06 00:57:04.125882 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125887 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125894 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125899 | orchestrator | 2025-05-06 00:57:04.125904 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-06 00:57:04.125909 | orchestrator | Tuesday 06 May 2025 00:56:01 +0000 (0:00:00.340) 0:11:32.172 *********** 2025-05-06 00:57:04.125913 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125918 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125923 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125928 | orchestrator | 2025-05-06 00:57:04.125933 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-06 00:57:04.125938 | orchestrator | Tuesday 06 May 2025 00:56:02 +0000 (0:00:00.331) 0:11:32.504 *********** 2025-05-06 00:57:04.125942 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125947 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125952 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125957 | orchestrator | 2025-05-06 00:57:04.125962 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-06 00:57:04.125967 | orchestrator | Tuesday 06 May 2025 00:56:02 +0000 (0:00:00.302) 0:11:32.807 *********** 2025-05-06 00:57:04.125971 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.125980 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.125985 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.125990 | orchestrator | 2025-05-06 00:57:04.125995 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-06 00:57:04.126000 | orchestrator | Tuesday 06 May 2025 00:56:03 
+0000 (0:00:00.579) 0:11:33.387 *********** 2025-05-06 00:57:04.126004 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.126009 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.126030 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.126036 | orchestrator | 2025-05-06 00:57:04.126041 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-06 00:57:04.126046 | orchestrator | Tuesday 06 May 2025 00:56:03 +0000 (0:00:00.323) 0:11:33.710 *********** 2025-05-06 00:57:04.126051 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.126056 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.126061 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.126066 | orchestrator | 2025-05-06 00:57:04.126071 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-06 00:57:04.126075 | orchestrator | Tuesday 06 May 2025 00:56:03 +0000 (0:00:00.298) 0:11:34.009 *********** 2025-05-06 00:57:04.126080 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.126085 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.126090 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.126095 | orchestrator | 2025-05-06 00:57:04.126100 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-06 00:57:04.126105 | orchestrator | Tuesday 06 May 2025 00:56:04 +0000 (0:00:00.328) 0:11:34.338 *********** 2025-05-06 00:57:04.126109 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.126114 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.126119 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.126124 | orchestrator | 2025-05-06 00:57:04.126129 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 
2025-05-06 00:57:04.126134 | orchestrator | Tuesday 06 May 2025 00:56:04 +0000 (0:00:00.575) 0:11:34.913 *********** 2025-05-06 00:57:04.126138 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.126146 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.126151 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.126156 | orchestrator | 2025-05-06 00:57:04.126161 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-06 00:57:04.126166 | orchestrator | Tuesday 06 May 2025 00:56:04 +0000 (0:00:00.309) 0:11:35.223 *********** 2025-05-06 00:57:04.126170 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.126175 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-06 00:57:04.126180 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.126185 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.126190 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-06 00:57:04.126195 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.126200 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.126205 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-06 00:57:04.126209 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.126214 | orchestrator | 2025-05-06 00:57:04.126219 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-06 00:57:04.126224 | orchestrator | Tuesday 06 May 2025 00:56:05 +0000 (0:00:00.390) 0:11:35.613 *********** 2025-05-06 00:57:04.126229 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-06 00:57:04.126236 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-06 00:57:04.126241 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.126246 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  
2025-05-06 00:57:04.126250 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-06 00:57:04.126255 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.126260 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-06 00:57:04.126265 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-06 00:57:04.126270 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.126274 | orchestrator | 2025-05-06 00:57:04.126279 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-06 00:57:04.126284 | orchestrator | Tuesday 06 May 2025 00:56:05 +0000 (0:00:00.344) 0:11:35.957 *********** 2025-05-06 00:57:04.126289 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.126293 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.126298 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.126305 | orchestrator | 2025-05-06 00:57:04.126310 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-06 00:57:04.126315 | orchestrator | Tuesday 06 May 2025 00:56:06 +0000 (0:00:00.609) 0:11:36.567 *********** 2025-05-06 00:57:04.126320 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.126327 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.126332 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:57:04.126337 | orchestrator | 2025-05-06 00:57:04.126342 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-06 00:57:04.126347 | orchestrator | Tuesday 06 May 2025 00:56:06 +0000 (0:00:00.327) 0:11:36.894 *********** 2025-05-06 00:57:04.126351 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:57:04.126356 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:57:04.126361 | orchestrator | skipping: [testbed-node-5] 
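The skipped ceph-facts tasks above choose `_radosgw_address` either from `radosgw_address_block` (the first node address that falls inside the configured CIDR) or from a named interface. A rough sketch of the CIDR-based selection, assuming a plain list of candidate addresses per host (the variable names mirror the tasks, but the logic is illustrative, not ceph-ansible's implementation):

```python
import ipaddress

def pick_radosgw_address(candidate_ips, radosgw_address_block):
    """Return the first candidate IP that falls inside the configured block."""
    network = ipaddress.ip_network(radosgw_address_block)
    for ip in candidate_ips:
        if ipaddress.ip_address(ip) in network:
            return ip
    return None  # no address matched the block
```

For example, with candidates `["10.0.0.5", "192.168.16.10"]` and block `"192.168.16.0/24"` this picks `192.168.16.10`, matching the management network seen elsewhere in this log.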
2025-05-06 00:57:04.126366 | orchestrator | 
2025-05-06 00:57:04.126371 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-06 00:57:04.126376 | orchestrator | Tuesday 06 May 2025 00:56:06 +0000 (0:00:00.307) 0:11:37.201 ***********
2025-05-06 00:57:04.126380 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126385 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126390 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126395 | orchestrator | 
2025-05-06 00:57:04.126399 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-06 00:57:04.126409 | orchestrator | Tuesday 06 May 2025 00:56:07 +0000 (0:00:00.314) 0:11:37.516 ***********
2025-05-06 00:57:04.126414 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126419 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126424 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126428 | orchestrator | 
2025-05-06 00:57:04.126433 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-06 00:57:04.126438 | orchestrator | Tuesday 06 May 2025 00:56:07 +0000 (0:00:00.578) 0:11:38.094 ***********
2025-05-06 00:57:04.126443 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126447 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126452 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126457 | orchestrator | 
2025-05-06 00:57:04.126462 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-06 00:57:04.126467 | orchestrator | Tuesday 06 May 2025 00:56:08 +0000 (0:00:00.323) 0:11:38.418 ***********
2025-05-06 00:57:04.126471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-05-06 00:57:04.126476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-05-06 00:57:04.126481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-05-06 00:57:04.126486 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126490 | orchestrator | 
2025-05-06 00:57:04.126495 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-06 00:57:04.126500 | orchestrator | Tuesday 06 May 2025 00:56:08 +0000 (0:00:00.424) 0:11:38.842 ***********
2025-05-06 00:57:04.126505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-05-06 00:57:04.126510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-05-06 00:57:04.126515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-05-06 00:57:04.126519 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126524 | orchestrator | 
2025-05-06 00:57:04.126529 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-06 00:57:04.126534 | orchestrator | Tuesday 06 May 2025 00:56:09 +0000 (0:00:00.412) 0:11:39.255 ***********
2025-05-06 00:57:04.126539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-05-06 00:57:04.126543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-05-06 00:57:04.126548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-05-06 00:57:04.126553 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126558 | orchestrator | 
2025-05-06 00:57:04.126563 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-06 00:57:04.126568 | orchestrator | Tuesday 06 May 2025 00:56:09 +0000 (0:00:00.424) 0:11:39.680 ***********
2025-05-06 00:57:04.126572 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126577 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126582 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126587 | orchestrator | 
2025-05-06 00:57:04.126591 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-06 00:57:04.126596 | orchestrator | Tuesday 06 May 2025 00:56:09 +0000 (0:00:00.360) 0:11:40.040 ***********
2025-05-06 00:57:04.126601 | orchestrator | skipping: [testbed-node-3] => (item=0) 
2025-05-06 00:57:04.126606 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126611 | orchestrator | skipping: [testbed-node-4] => (item=0) 
2025-05-06 00:57:04.126615 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126620 | orchestrator | skipping: [testbed-node-5] => (item=0) 
2025-05-06 00:57:04.126625 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126630 | orchestrator | 
2025-05-06 00:57:04.126634 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-06 00:57:04.126639 | orchestrator | Tuesday 06 May 2025 00:56:10 +0000 (0:00:00.926) 0:11:40.967 ***********
2025-05-06 00:57:04.126644 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126673 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126681 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126686 | orchestrator | 
2025-05-06 00:57:04.126691 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-06 00:57:04.126696 | orchestrator | Tuesday 06 May 2025 00:56:11 +0000 (0:00:00.325) 0:11:41.292 ***********
2025-05-06 00:57:04.126701 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126705 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126710 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126715 | orchestrator | 
2025-05-06 00:57:04.126720 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-06 00:57:04.126725 | orchestrator | Tuesday 06 May 2025 00:56:11 +0000 (0:00:00.369) 0:11:41.662 ***********
2025-05-06 00:57:04.126729 | orchestrator | skipping: [testbed-node-3] => (item=0) 
2025-05-06 00:57:04.126734 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126739 | orchestrator | skipping: [testbed-node-4] => (item=0) 
2025-05-06 00:57:04.126744 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126749 | orchestrator | skipping: [testbed-node-5] => (item=0) 
2025-05-06 00:57:04.126754 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126759 | orchestrator | 
2025-05-06 00:57:04.126766 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-06 00:57:04.126771 | orchestrator | Tuesday 06 May 2025 00:56:11 +0000 (0:00:00.454) 0:11:42.117 ***********
2025-05-06 00:57:04.126776 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 
2025-05-06 00:57:04.126784 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126789 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2025-05-06 00:57:04.126794 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126798 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 
2025-05-06 00:57:04.126803 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126808 | orchestrator | 
2025-05-06 00:57:04.126813 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-06 00:57:04.126818 | orchestrator | Tuesday 06 May 2025 00:56:12 +0000 (0:00:00.592) 0:11:42.709 ***********
2025-05-06 00:57:04.126822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-05-06 00:57:04.126827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-05-06 00:57:04.126832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-05-06 00:57:04.126837 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3) 
2025-05-06 00:57:04.126841 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4) 
2025-05-06 00:57:04.126846 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5) 
2025-05-06 00:57:04.126851 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126856 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126860 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3) 
2025-05-06 00:57:04.126865 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4) 
2025-05-06 00:57:04.126870 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5) 
2025-05-06 00:57:04.126875 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126880 | orchestrator | 
2025-05-06 00:57:04.126884 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-06 00:57:04.126889 | orchestrator | Tuesday 06 May 2025 00:56:13 +0000 (0:00:00.575) 0:11:43.284 ***********
2025-05-06 00:57:04.126894 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126899 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126903 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126908 | orchestrator | 
2025-05-06 00:57:04.126913 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-06 00:57:04.126918 | orchestrator | Tuesday 06 May 2025 00:56:13 +0000 (0:00:00.784) 0:11:44.068 ***********
2025-05-06 00:57:04.126925 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2025-05-06 00:57:04.126930 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126935 | orchestrator | skipping: [testbed-node-4] => (item=None) 
2025-05-06 00:57:04.126940 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126945 | orchestrator | skipping: [testbed-node-5] => (item=None) 
2025-05-06 00:57:04.126949 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126954 | orchestrator | 
2025-05-06 00:57:04.126959 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-06 00:57:04.126966 | orchestrator | Tuesday 06 May 2025 00:56:14 +0000 (0:00:00.567) 0:11:44.635 ***********
2025-05-06 00:57:04.126971 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.126976 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.126981 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.126986 | orchestrator | 
2025-05-06 00:57:04.126990 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-06 00:57:04.126995 | orchestrator | Tuesday 06 May 2025 00:56:15 +0000 (0:00:00.809) 0:11:45.445 ***********
2025-05-06 00:57:04.127000 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.127007 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.127012 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.127017 | orchestrator | 
2025-05-06 00:57:04.127021 | orchestrator | TASK [ceph-rgw : include common.yml] *******************************************
2025-05-06 00:57:04.127026 | orchestrator | Tuesday 06 May 2025 00:56:15 +0000 (0:00:00.573) 0:11:46.018 ***********
2025-05-06 00:57:04.127031 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.127036 | orchestrator | 
2025-05-06 00:57:04.127041 | orchestrator | TASK [ceph-rgw : create rados gateway directories] *****************************
2025-05-06 00:57:04.127046 | orchestrator | Tuesday 06 May 2025 00:56:16 +0000 (0:00:00.808) 0:11:46.827 ***********
2025-05-06 00:57:04.127050 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2025-05-06 00:57:04.127055 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2025-05-06 00:57:04.127060 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2025-05-06 00:57:04.127065 | orchestrator | 
2025-05-06 00:57:04.127070 | orchestrator | TASK [ceph-rgw : get keys from monitors] ***************************************
2025-05-06 00:57:04.127074 | orchestrator | Tuesday 06 May 2025 00:56:17 +0000 (0:00:00.649) 0:11:47.476 ***********
2025-05-06 00:57:04.127079 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:57:04.127084 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2025-05-06 00:57:04.127089 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-06 00:57:04.127094 | orchestrator | 
2025-05-06 00:57:04.127098 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] ***********************************
2025-05-06 00:57:04.127103 | orchestrator | Tuesday 06 May 2025 00:56:19 +0000 (0:00:01.835) 0:11:49.312 ***********
2025-05-06 00:57:04.127108 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-06 00:57:04.127113 | orchestrator | skipping: [testbed-node-3] => (item=None) 
2025-05-06 00:57:04.127120 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.127126 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-06 00:57:04.127131 | orchestrator | skipping: [testbed-node-4] => (item=None) 
2025-05-06 00:57:04.127135 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.127140 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-06 00:57:04.127145 | orchestrator | skipping: [testbed-node-5] => (item=None) 
2025-05-06 00:57:04.127150 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.127155 | orchestrator | 
2025-05-06 00:57:04.127160 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] **********
2025-05-06 00:57:04.127165 | orchestrator | Tuesday 06 May 2025 00:56:20 +0000 (0:00:01.168) 0:11:50.480 ***********
2025-05-06 00:57:04.127169 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.127177 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.127182 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.127187 | orchestrator | 
2025-05-06 00:57:04.127192 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ******************************
2025-05-06 00:57:04.127196 | orchestrator | Tuesday 06 May 2025 00:56:20 +0000 (0:00:00.534) 0:11:51.014 ***********
2025-05-06 00:57:04.127201 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.127206 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.127211 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.127215 | orchestrator | 
2025-05-06 00:57:04.127220 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] **************************************
2025-05-06 00:57:04.127225 | orchestrator | Tuesday 06 May 2025 00:56:21 +0000 (0:00:00.314) 0:11:51.329 ***********
2025-05-06 00:57:04.127230 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-05-06 00:57:04.127235 | orchestrator | 
2025-05-06 00:57:04.127239 | orchestrator | TASK [ceph-rgw : create ec profile] ********************************************
2025-05-06 00:57:04.127244 | orchestrator | Tuesday 06 May 2025 00:56:21 +0000 (0:00:00.250) 0:11:51.580 ***********
2025-05-06 00:57:04.127249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127277 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.127281 | orchestrator | 
2025-05-06 00:57:04.127286 | orchestrator | TASK [ceph-rgw : set crush rule] ***********************************************
2025-05-06 00:57:04.127291 | orchestrator | Tuesday 06 May 2025 00:56:22 +0000 (0:00:00.835) 0:11:52.415 ***********
2025-05-06 00:57:04.127296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127323 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.127327 | orchestrator | 
2025-05-06 00:57:04.127332 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] **************************************
2025-05-06 00:57:04.127337 | orchestrator | Tuesday 06 May 2025 00:56:23 +0000 (0:00:00.859) 0:11:53.275 ***********
2025-05-06 00:57:04.127342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 
2025-05-06 00:57:04.127369 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.127374 | orchestrator | 
2025-05-06 00:57:04.127379 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ******************************
2025-05-06 00:57:04.127384 | orchestrator | Tuesday 06 May 2025 00:56:23 +0000 (0:00:00.628) 0:11:53.904 ***********
2025-05-06 00:57:04.127391 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-06 00:57:04.127397 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-06 00:57:04.127401 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-06 00:57:04.127406 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-06 00:57:04.127411 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-06 00:57:04.127416 | orchestrator | 
2025-05-06 00:57:04.127421 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] *************************
2025-05-06 00:57:04.127426 | orchestrator | Tuesday 06 May 2025 00:56:49 +0000 (0:00:25.969) 0:12:19.873 ***********
2025-05-06 00:57:04.127430 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.127435 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.127440 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.127445 | orchestrator | 
2025-05-06 00:57:04.127450 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ******************************
2025-05-06 00:57:04.127454 | orchestrator | Tuesday 06 May 2025 00:56:50 +0000 (0:00:00.480) 0:12:20.354 ***********
2025-05-06 00:57:04.127459 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.127464 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.127469 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.127473 | orchestrator | 
2025-05-06 00:57:04.127478 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] *********************************
2025-05-06 00:57:04.127485 | orchestrator | Tuesday 06 May 2025 00:56:50 +0000 (0:00:00.328) 0:12:20.682 ***********
2025-05-06 00:57:04.127490 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.127495 | orchestrator | 
2025-05-06 00:57:04.127500 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] *************************************
2025-05-06 00:57:04.127505 | orchestrator | Tuesday 06 May 2025 00:56:50 +0000 (0:00:00.535) 0:12:21.217 ***********
2025-05-06 00:57:04.127510 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.127515 | orchestrator | 
2025-05-06 00:57:04.127519 | orchestrator | TASK [ceph-rgw : generate systemd unit file] ***********************************
2025-05-06 00:57:04.127524 | orchestrator | Tuesday 06 May 2025 00:56:51 +0000 (0:00:00.781) 0:12:21.999 ***********
2025-05-06 00:57:04.127529 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.127534 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.127539 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.127543 | orchestrator | 
2025-05-06 00:57:04.127548 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ********************
2025-05-06 00:57:04.127553 | orchestrator | Tuesday 06 May 2025 00:56:52 +0000 (0:00:01.192) 0:12:23.192 ***********
2025-05-06 00:57:04.127558 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.127563 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.127570 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.127575 | orchestrator | 
2025-05-06 00:57:04.127580 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] ***********************************
2025-05-06 00:57:04.127585 | orchestrator | Tuesday 06 May 2025 00:56:53 +0000 (0:00:01.036) 0:12:24.228 ***********
2025-05-06 00:57:04.127590 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.127595 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.127599 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.127604 | orchestrator | 
2025-05-06 00:57:04.127609 | orchestrator | TASK [ceph-rgw : systemd start rgw container] **********************************
2025-05-06 00:57:04.127614 | orchestrator | Tuesday 06 May 2025 00:56:55 +0000 (0:00:01.930) 0:12:26.159 ***********
2025-05-06 00:57:04.127618 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-06 00:57:04.127623 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-06 00:57:04.127628 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-06 00:57:04.127633 | orchestrator | 
2025-05-06 00:57:04.127638 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] *****************************
2025-05-06 00:57:04.127642 | orchestrator | Tuesday 06 May 2025 00:56:57 +0000 (0:00:01.674) 0:12:27.833 ***********
2025-05-06 00:57:04.127658 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.127663 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:57:04.127668 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:57:04.127673 | orchestrator | 
2025-05-06 00:57:04.127677 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-06 00:57:04.127682 | orchestrator | Tuesday 06 May 2025 00:56:58 +0000 (0:00:00.927) 0:12:28.761 ***********
2025-05-06 00:57:04.127687 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.127692 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.127697 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.127701 | orchestrator | 
2025-05-06 00:57:04.127706 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-05-06 00:57:04.127711 | orchestrator | Tuesday 06 May 2025 00:56:59 +0000 (0:00:00.568) 0:12:29.329 ***********
2025-05-06 00:57:04.127716 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:57:04.127721 | orchestrator | 
2025-05-06 00:57:04.127728 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
2025-05-06 00:57:04.127733 | orchestrator | Tuesday 06 May 2025 00:56:59 +0000 (0:00:00.579) 0:12:29.908 ***********
2025-05-06 00:57:04.127737 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.127742 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.127747 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.127752 | orchestrator | 
2025-05-06 00:57:04.127757 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
2025-05-06 00:57:04.127762 | orchestrator | Tuesday 06 May 2025 00:56:59 +0000 (0:00:00.253) 0:12:30.162 ***********
2025-05-06 00:57:04.127767 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.127771 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.127776 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.127781 | orchestrator | 
2025-05-06 00:57:04.127786 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
2025-05-06 00:57:04.127790 | orchestrator | Tuesday 06 May 2025 00:57:00 +0000 (0:00:01.060) 0:12:31.222 ***********
2025-05-06 00:57:04.127795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2025-05-06 00:57:04.127800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2025-05-06 00:57:04.127805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2025-05-06 00:57:04.127810 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:57:04.127815 | orchestrator | 
2025-05-06 00:57:04.127823 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
2025-05-06 00:57:04.127827 | orchestrator | Tuesday 06 May 2025 00:57:01 +0000 (0:00:00.882) 0:12:32.104 ***********
2025-05-06 00:57:04.127832 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:57:04.127837 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:57:04.127842 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:57:04.127847 | orchestrator | 
2025-05-06 00:57:04.127851 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-06 00:57:04.127856 | orchestrator | Tuesday 06 May 2025 00:57:02 +0000 (0:00:00.332) 0:12:32.437 ***********
2025-05-06 00:57:04.127861 | orchestrator | changed: [testbed-node-3]
2025-05-06 00:57:04.127866 | orchestrator | changed: [testbed-node-4]
2025-05-06 00:57:04.127871 | orchestrator | changed: [testbed-node-5]
2025-05-06 00:57:04.127875 | orchestrator | 
2025-05-06 00:57:04.127880 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:57:04.127885 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0
2025-05-06 00:57:04.127890 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
2025-05-06 00:57:04.127895 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0
2025-05-06 00:57:04.127900 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0
2025-05-06 00:57:04.127905 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0
2025-05-06 00:57:04.127910 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0
2025-05-06 00:57:04.127915 | orchestrator | 
2025-05-06 00:57:04.127919 | orchestrator | 
2025-05-06 00:57:04.127924 | orchestrator | 
2025-05-06 00:57:04.127929 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 00:57:04.127934 | orchestrator | Tuesday 06 May 2025 00:57:03 +0000 (0:00:01.198) 0:12:33.635 ***********
2025-05-06 00:57:04.127939 | orchestrator | ===============================================================================
2025-05-06 00:57:04.127943 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 44.49s
2025-05-06 00:57:04.127951 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 39.16s
2025-05-06 00:57:04.127956 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 25.97s
2025-05-06 00:57:04.127960 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.49s
2025-05-06 00:57:04.127965 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.01s
2025-05-06 00:57:04.127970 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.54s
2025-05-06 00:57:04.127975 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.65s
2025-05-06 00:57:04.127979 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.96s
2025-05-06 00:57:04.127984 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 7.95s
2025-05-06 00:57:04.127989 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 7.36s
2025-05-06 00:57:04.127994 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.48s
2025-05-06 00:57:04.127999 | orchestrator | ceph-config : create ceph initial directories --------------------------- 5.95s
2025-05-06 00:57:04.128003 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 5.05s
2025-05-06 00:57:04.128008 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.88s
2025-05-06 00:57:04.128016 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 4.29s
2025-05-06 00:57:04.128021 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 3.94s
2025-05-06 00:57:04.128026 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.54s
2025-05-06 00:57:04.128032 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.46s
2025-05-06 00:57:07.135360 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.31s
2025-05-06 00:57:07.135488 | orchestrator | ceph-facts : find a running mon container ------------------------------- 3.14s
2025-05-06 00:57:07.135523 | orchestrator | 2025-05-06 00:57:07 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED
2025-05-06 00:57:07.136234 | orchestrator | 2025-05-06 00:57:07 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:07.139804 | orchestrator | 2025-05-06 00:57:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:10.195969 | orchestrator | 2025-05-06 00:57:07 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:10.196118 | orchestrator | 2025-05-06 00:57:10 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED
2025-05-06 00:57:10.197070 | orchestrator | 2025-05-06 00:57:10 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:10.198721 | orchestrator | 2025-05-06 00:57:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:13.244834 | orchestrator | 2025-05-06 00:57:10 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:13.244989 | orchestrator | 2025-05-06 00:57:13 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED
2025-05-06 00:57:13.245938 | orchestrator | 2025-05-06 00:57:13 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:13.247576 | orchestrator | 2025-05-06 00:57:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:16.299272 | orchestrator | 2025-05-06 00:57:13 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:16.299415 | orchestrator | 2025-05-06 00:57:16 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED
2025-05-06 00:57:16.300402 | orchestrator | 2025-05-06 00:57:16 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:16.302167 | orchestrator | 2025-05-06 00:57:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:19.347313 | orchestrator | 2025-05-06 00:57:16 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:19.347465 | orchestrator | 2025-05-06 00:57:19 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state STARTED
2025-05-06 00:57:19.348535 | orchestrator | 2025-05-06 00:57:19 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:19.349850 | orchestrator | 2025-05-06 00:57:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:22.414792 | orchestrator | 2025-05-06 00:57:19 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:22.414973 | orchestrator | 
2025-05-06 00:57:22.415006 | orchestrator | 
2025-05-06 00:57:22.415031 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-05-06 00:57:22.415054 | orchestrator | 
2025-05-06 00:57:22.415078 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-06 00:57:22.415101 | orchestrator | Tuesday 06 May 2025 00:53:55 +0000 (0:00:00.144) 0:00:00.144 ***********
2025-05-06 00:57:22.415125 | orchestrator | ok: [localhost] => {
2025-05-06 00:57:22.415149 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-05-06 00:57:22.415207 | orchestrator | }
2025-05-06 00:57:22.415231 | orchestrator |
2025-05-06 00:57:22.415253 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-05-06 00:57:22.415276 | orchestrator | Tuesday 06 May 2025 00:53:55 +0000 (0:00:00.053) 0:00:00.197 ***********
2025-05-06 00:57:22.415297 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-05-06 00:57:22.415321 | orchestrator | ...ignoring
2025-05-06 00:57:22.415343 | orchestrator |
2025-05-06 00:57:22.415366 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-05-06 00:57:22.415391 | orchestrator | Tuesday 06 May 2025 00:53:57 +0000 (0:00:02.458) 0:00:02.655 ***********
2025-05-06 00:57:22.415416 | orchestrator | skipping: [localhost]
2025-05-06 00:57:22.415547 | orchestrator |
2025-05-06 00:57:22.415578 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-05-06 00:57:22.415604 | orchestrator | Tuesday 06 May 2025 00:53:57 +0000 (0:00:00.031) 0:00:02.687 ***********
2025-05-06 00:57:22.415682 | orchestrator | ok: [localhost]
2025-05-06 00:57:22.415709 | orchestrator |
2025-05-06 00:57:22.415734 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 00:57:22.415757 | orchestrator |
2025-05-06 00:57:22.415780 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 00:57:22.415804 | orchestrator | Tuesday 06 May 2025 00:53:57 +0000 (0:00:00.156) 0:00:02.844 ***********
2025-05-06 00:57:22.415827 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.415850 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:22.415874 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:22.415913 | orchestrator |
2025-05-06 00:57:22.415938 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 00:57:22.415967 | orchestrator | Tuesday 06 May 2025 00:53:58 +0000 (0:00:00.315) 0:00:03.159 ***********
2025-05-06 00:57:22.415983 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-05-06 00:57:22.416002 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-05-06 00:57:22.416017 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-05-06 00:57:22.416031 | orchestrator |
2025-05-06 00:57:22.416045 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-05-06 00:57:22.416059 | orchestrator |
2025-05-06 00:57:22.416073 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-05-06 00:57:22.416087 | orchestrator | Tuesday 06 May 2025 00:53:58 +0000 (0:00:00.498) 0:00:03.657 ***********
2025-05-06 00:57:22.416100 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:57:22.416114 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:57:22.416128 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:57:22.416142 | orchestrator |
2025-05-06 00:57:22.416155 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-06 00:57:22.416169 | orchestrator | Tuesday 06 May 2025 00:53:59 +0000 (0:00:00.476) 0:00:04.134 ***********
2025-05-06 00:57:22.416184 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:57:22.416199 | orchestrator |
2025-05-06 00:57:22.416212 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-05-06 00:57:22.416226 | orchestrator | Tuesday 06 May 2025 00:54:00 +0000 (0:00:01.010) 0:00:05.144 ***********
2025-05-06 00:57:22.416266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-06 00:57:22.416399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-06 00:57:22.416418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-06 00:57:22.416444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-05-06 00:57:22.416469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-06 00:57:22.416484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-06 00:57:22.416498 | orchestrator |
2025-05-06 00:57:22.416513 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2025-05-06 00:57:22.416527 | orchestrator | Tuesday 06 May 2025 00:54:04 +0000 (0:00:04.293) 0:00:09.437 ***********
2025-05-06 00:57:22.416540 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.416556 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.416569 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:22.416583 | orchestrator | 2025-05-06 00:57:22.416597 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-06 00:57:22.416611 | orchestrator | Tuesday 06 May 2025 00:54:05 +0000 (0:00:00.762) 0:00:10.200 *********** 2025-05-06 00:57:22.416657 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:22.416683 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:22.416705 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:22.416725 | orchestrator | 2025-05-06 00:57:22.416739 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-06 00:57:22.416762 | orchestrator | Tuesday 06 May 2025 00:54:06 +0000 (0:00:01.349) 0:00:11.550 *********** 2025-05-06 00:57:22.416811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 
fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-06 00:57:22.416840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-06 00:57:22.416860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-06 00:57:22.416893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-06 00:57:22.416909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-06 00:57:22.416924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-06 00:57:22.416938 | orchestrator | 2025-05-06 00:57:22.416952 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-06 00:57:22.416966 | orchestrator | Tuesday 06 May 2025 00:54:11 +0000 (0:00:04.698) 0:00:16.248 *********** 2025-05-06 00:57:22.416979 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:57:22.416993 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:57:22.417009 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:22.417024 | orchestrator | 2025-05-06 00:57:22.417041 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-06 00:57:22.417063 | orchestrator | Tuesday 06 May 2025 00:54:12 +0000 (0:00:01.082) 0:00:17.330 *********** 2025-05-06 00:57:22.417079 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:22.417094 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:22.417110 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:57:22.417126 | orchestrator | 2025-05-06 00:57:22.417143 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-06 00:57:22.417159 | orchestrator | Tuesday 06 May 2025 00:54:18 +0000 (0:00:05.932) 0:00:23.262 *********** 2025-05-06 00:57:22.417182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-06 00:57:22.417199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-06 00:57:22.417227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-06 00:57:22.417250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-06 00:57:22.417267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-06 00:57:22.417282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-06 00:57:22.417296 | orchestrator | 2025-05-06 00:57:22.417317 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-06 00:57:22.417331 | orchestrator | Tuesday 06 May 2025 00:54:22 +0000 (0:00:03.818) 0:00:27.081 *********** 2025-05-06 00:57:22.417345 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:57:22.417359 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:57:22.417373 | orchestrator | changed: [testbed-node-2] 2025-05-06 
00:57:22.417386 | orchestrator |
2025-05-06 00:57:22.417400 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-05-06 00:57:22.417420 | orchestrator | Tuesday 06 May 2025 00:54:23 +0000 (0:00:01.073) 0:00:28.155 ***********
2025-05-06 00:57:22.417443 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.417467 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:22.417488 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:22.417512 | orchestrator |
2025-05-06 00:57:22.417536 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-05-06 00:57:22.417560 | orchestrator | Tuesday 06 May 2025 00:54:23 +0000 (0:00:00.466) 0:00:28.622 ***********
2025-05-06 00:57:22.417582 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.417597 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:22.417610 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:22.417676 | orchestrator |
2025-05-06 00:57:22.417693 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-05-06 00:57:22.417707 | orchestrator | Tuesday 06 May 2025 00:54:24 +0000 (0:00:00.513) 0:00:29.135 ***********
2025-05-06 00:57:22.417722 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-05-06 00:57:22.417736 | orchestrator | ...ignoring
2025-05-06 00:57:22.417750 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-05-06 00:57:22.417764 | orchestrator | ...ignoring
2025-05-06 00:57:22.417778 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-05-06 00:57:22.417791 | orchestrator | ...ignoring
2025-05-06 00:57:22.417805 | orchestrator |
2025-05-06 00:57:22.417819 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-05-06 00:57:22.417833 | orchestrator | Tuesday 06 May 2025 00:54:34 +0000 (0:00:10.824) 0:00:39.960 ***********
2025-05-06 00:57:22.417847 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.417860 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:22.417874 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:22.417888 | orchestrator |
2025-05-06 00:57:22.417900 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-05-06 00:57:22.417919 | orchestrator | Tuesday 06 May 2025 00:54:35 +0000 (0:00:00.615) 0:00:40.575 ***********
2025-05-06 00:57:22.417931 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:22.417944 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.417956 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.417969 | orchestrator |
2025-05-06 00:57:22.417981 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-05-06 00:57:22.417993 | orchestrator | Tuesday 06 May 2025 00:54:35 +0000 (0:00:00.479) 0:00:41.054 ***********
2025-05-06 00:57:22.418005 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:22.418050 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.418065 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.418078 | orchestrator |
2025-05-06 00:57:22.418098 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-05-06 00:57:22.418111 | orchestrator | Tuesday 06 May 2025 00:54:36 +0000 (0:00:00.413) 0:00:41.468 ***********
2025-05-06 00:57:22.418124 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:22.418136 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.418148 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.418160 | orchestrator |
2025-05-06 00:57:22.418172 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-05-06 00:57:22.418193 | orchestrator | Tuesday 06 May 2025 00:54:36 +0000 (0:00:00.573) 0:00:42.041 ***********
2025-05-06 00:57:22.418205 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.418217 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:22.418229 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:22.418241 | orchestrator |
2025-05-06 00:57:22.418254 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-05-06 00:57:22.418266 | orchestrator | Tuesday 06 May 2025 00:54:37 +0000 (0:00:00.610) 0:00:42.652 ***********
2025-05-06 00:57:22.418278 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:22.418291 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.418303 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.418315 | orchestrator |
2025-05-06 00:57:22.418327 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-06 00:57:22.418339 | orchestrator | Tuesday 06 May 2025 00:54:38 +0000 (0:00:00.524) 0:00:43.176 ***********
2025-05-06 00:57:22.418351 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.418364 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.418376 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-05-06 00:57:22.418388 | orchestrator |
2025-05-06 00:57:22.418400 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-05-06 00:57:22.418413 | orchestrator | Tuesday 06 May 2025 00:54:38 +0000 (0:00:00.508) 0:00:43.684 ***********
2025-05-06 00:57:22.418425 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:22.418437 | orchestrator |
2025-05-06 00:57:22.418449 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-05-06 00:57:22.418461 | orchestrator | Tuesday 06 May 2025 00:54:49 +0000 (0:00:10.492) 0:00:54.177 ***********
2025-05-06 00:57:22.418473 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.418485 | orchestrator |
2025-05-06 00:57:22.418497 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-05-06 00:57:22.418509 | orchestrator | Tuesday 06 May 2025 00:54:49 +0000 (0:00:00.129) 0:00:54.306 ***********
2025-05-06 00:57:22.418521 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:22.418534 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.418546 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.418558 | orchestrator |
2025-05-06 00:57:22.418570 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-05-06 00:57:22.418582 | orchestrator | Tuesday 06 May 2025 00:54:50 +0000 (0:00:01.026) 0:00:55.332 ***********
2025-05-06 00:57:22.418594 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:22.418607 | orchestrator |
2025-05-06 00:57:22.418619 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-05-06 00:57:22.418648 | orchestrator | Tuesday 06 May 2025 00:54:57 +0000 (0:00:07.696) 0:01:03.029 ***********
2025-05-06 00:57:22.418661 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left).
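The ignored `Timeout when waiting for search string MariaDB` failures above come from a wait_for-style probe: open a TCP connection to port 3306 and look for the string `MariaDB` in the server greeting (MySQL/MariaDB servers embed their version string in the initial handshake packet). A minimal sketch of such a probe, assuming an illustrative function name and parameters not taken from the playbook:

```python
import socket


def mariadb_port_alive(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    """Check whether a MariaDB server answers on host:port.

    MySQL/MariaDB servers send an initial handshake packet on connect that
    embeds the server version string, so a plain TCP read is enough to look
    for the 'MariaDB' marker -- the same idea as the playbook's probe.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(128)  # first bytes of the handshake packet
            return b"MariaDB" in banner
    except OSError:
        # Connection refused or timed out: the port is not live yet.
        return False
```

Before the bootstrap container has started, the connection is refused or never answers, which is exactly why these checks fail (and are deliberately ignored) on the first pass.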
2025-05-06 00:57:22.418674 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.418686 | orchestrator |
2025-05-06 00:57:22.418698 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-05-06 00:57:22.418710 | orchestrator | Tuesday 06 May 2025 00:55:05 +0000 (0:00:07.247) 0:01:10.276 ***********
2025-05-06 00:57:22.418722 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.418734 | orchestrator |
2025-05-06 00:57:22.418746 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-05-06 00:57:22.418758 | orchestrator | Tuesday 06 May 2025 00:55:07 +0000 (0:00:02.484) 0:01:12.760 ***********
2025-05-06 00:57:22.418770 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:22.418782 | orchestrator |
2025-05-06 00:57:22.418795 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-05-06 00:57:22.418807 | orchestrator | Tuesday 06 May 2025 00:55:07 +0000 (0:00:00.117) 0:01:12.878 ***********
2025-05-06 00:57:22.418819 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:22.418845 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.418858 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.418870 | orchestrator |
2025-05-06 00:57:22.418882 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-05-06 00:57:22.418894 | orchestrator | Tuesday 06 May 2025 00:55:08 +0000 (0:00:00.441) 0:01:13.320 ***********
2025-05-06 00:57:22.418906 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:22.418918 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:22.418931 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:22.418943 | orchestrator |
2025-05-06 00:57:22.418955 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] *************
2025-05-06 00:57:22.418967 | orchestrator | Tuesday 06 May 2025 00:55:08 +0000 (0:00:00.442) 0:01:13.763 ***********
2025-05-06 00:57:22.418984 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-05-06 00:57:22.418997 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:22.419009 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:22.419021 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:22.419034 | orchestrator |
2025-05-06 00:57:22.419046 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-05-06 00:57:22.419058 | orchestrator | skipping: no hosts matched
2025-05-06 00:57:22.419070 | orchestrator |
2025-05-06 00:57:22.419082 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-05-06 00:57:22.419094 | orchestrator |
2025-05-06 00:57:22.419107 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-05-06 00:57:22.419119 | orchestrator | Tuesday 06 May 2025 00:55:28 +0000 (0:00:20.144) 0:01:33.907 ***********
2025-05-06 00:57:22.419131 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:57:22.419143 | orchestrator |
2025-05-06 00:57:22.419161 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-05-06 00:57:22.419173 | orchestrator | Tuesday 06 May 2025 00:55:50 +0000 (0:00:21.421) 0:01:55.329 ***********
2025-05-06 00:57:22.419186 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:22.419197 | orchestrator |
2025-05-06 00:57:22.419210 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-05-06 00:57:22.419222 | orchestrator | Tuesday 06 May 2025 00:56:05 +0000 (0:00:15.536) 0:02:10.866 ***********
2025-05-06 00:57:22.419234 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:22.419246 | orchestrator |
2025-05-06 00:57:22.419258 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-05-06 00:57:22.419270 | orchestrator |
2025-05-06 00:57:22.419282 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-05-06 00:57:22.419294 | orchestrator | Tuesday 06 May 2025 00:56:08 +0000 (0:00:02.570) 0:02:13.436 ***********
2025-05-06 00:57:22.419306 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:57:22.419318 | orchestrator |
2025-05-06 00:57:22.419330 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-05-06 00:57:22.419342 | orchestrator | Tuesday 06 May 2025 00:56:28 +0000 (0:00:20.420) 0:02:33.856 ***********
2025-05-06 00:57:22.419354 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:22.419366 | orchestrator |
2025-05-06 00:57:22.419378 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-05-06 00:57:22.419390 | orchestrator | Tuesday 06 May 2025 00:56:44 +0000 (0:00:15.551) 0:02:49.408 ***********
2025-05-06 00:57:22.419402 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:22.419414 | orchestrator |
2025-05-06 00:57:22.419426 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-05-06 00:57:22.419438 | orchestrator |
2025-05-06 00:57:22.419450 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-05-06 00:57:22.419462 | orchestrator | Tuesday 06 May 2025 00:56:46 +0000 (0:00:02.443) 0:02:51.851 ***********
2025-05-06 00:57:22.419475 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:22.419487 | orchestrator |
2025-05-06 00:57:22.419499 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-05-06 00:57:22.419517 | orchestrator | Tuesday 06 May 2025 00:57:03 +0000 (0:00:16.796) 0:03:08.647 ***********
2025-05-06 00:57:22.419530 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.419542 | orchestrator |
2025-05-06 00:57:22.419554 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-05-06 00:57:22.419566 | orchestrator | Tuesday 06 May 2025 00:57:04 +0000 (0:00:00.528) 0:03:09.176 ***********
2025-05-06 00:57:22.419578 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.419590 | orchestrator |
2025-05-06 00:57:22.419602 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-05-06 00:57:22.419614 | orchestrator |
2025-05-06 00:57:22.419643 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-05-06 00:57:22.419656 | orchestrator | Tuesday 06 May 2025 00:57:06 +0000 (0:00:02.526) 0:03:11.702 ***********
2025-05-06 00:57:22.419668 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:57:22.419680 | orchestrator |
2025-05-06 00:57:22.419693 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-05-06 00:57:22.419705 | orchestrator | Tuesday 06 May 2025 00:57:07 +0000 (0:00:00.693) 0:03:12.396 ***********
2025-05-06 00:57:22.419717 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.419729 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.419741 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:22.419753 | orchestrator |
2025-05-06 00:57:22.419765 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-05-06 00:57:22.419777 | orchestrator | Tuesday 06 May 2025 00:57:09 +0000 (0:00:02.578) 0:03:14.974 ***********
2025-05-06 00:57:22.419790 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.419802 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.419814 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:22.419829 | orchestrator |
2025-05-06 00:57:22.419849 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-05-06 00:57:22.419869 | orchestrator | Tuesday 06 May 2025 00:57:12 +0000 (0:00:02.351) 0:03:17.326 ***********
2025-05-06 00:57:22.419882 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.419895 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.419912 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:22.419924 | orchestrator |
2025-05-06 00:57:22.419937 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-05-06 00:57:22.419954 | orchestrator | Tuesday 06 May 2025 00:57:14 +0000 (0:00:02.336) 0:03:19.662 ***********
2025-05-06 00:57:22.419971 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.419983 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.420000 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:57:22.420017 | orchestrator |
2025-05-06 00:57:22.420029 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-05-06 00:57:22.420041 | orchestrator | Tuesday 06 May 2025 00:57:16 +0000 (0:00:02.252) 0:03:21.915 ***********
2025-05-06 00:57:22.420054 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:57:22.420066 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:57:22.420078 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:57:22.420090 | orchestrator |
2025-05-06 00:57:22.420110 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-05-06 00:57:22.420124 | orchestrator | Tuesday 06 May 2025 00:57:20 +0000 (0:00:03.278) 0:03:25.193 ***********
2025-05-06 00:57:22.420136 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:57:22.420155 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:57:22.420169 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:57:22.420188 | orchestrator |
2025-05-06 00:57:22.420203 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:57:22.420215 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-05-06 00:57:22.420228 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1
2025-05-06 00:57:22.420255 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1
2025-05-06 00:57:25.463481 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1
2025-05-06 00:57:25.463599 | orchestrator |
2025-05-06 00:57:25.463618 | orchestrator |
2025-05-06 00:57:25.463678 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 00:57:25.463693 | orchestrator | Tuesday 06 May 2025 00:57:20 +0000 (0:00:00.355) 0:03:25.549 ***********
2025-05-06 00:57:25.463705 | orchestrator | ===============================================================================
2025-05-06 00:57:25.463718 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.84s
2025-05-06 00:57:25.463730 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.09s
2025-05-06 00:57:25.463743 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 20.14s
2025-05-06 00:57:25.463755 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.80s
2025-05-06 00:57:25.463767 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.83s
2025-05-06 00:57:25.463791 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.49s
2025-05-06 00:57:25.463804 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.70s
2025-05-06 00:57:25.463817 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.25s
2025-05-06 00:57:25.463829 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.93s
2025-05-06 00:57:25.463841 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.01s
2025-05-06 00:57:25.463854 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.70s
2025-05-06 00:57:25.463866 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.29s
2025-05-06 00:57:25.463878 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.82s
2025-05-06 00:57:25.463891 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.28s
2025-05-06 00:57:25.463903 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.58s
2025-05-06 00:57:25.463915 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.53s
2025-05-06 00:57:25.463927 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.48s
2025-05-06 00:57:25.463940 | orchestrator | Check MariaDB service --------------------------------------------------- 2.46s
2025-05-06 00:57:25.463952 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.35s
2025-05-06 00:57:25.463964 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.34s
2025-05-06 00:57:25.463977 | orchestrator | 2025-05-06 00:57:22 | INFO  | Task ec3e1fe6-cc2e-40ec-bc32-d2770f314628 is in state SUCCESS
2025-05-06 00:57:25.463990 | orchestrator | 2025-05-06 00:57:22 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:25.464003 | orchestrator | 2025-05-06 00:57:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:25.464016 | orchestrator | 2025-05-06 00:57:22 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:25.464028 | orchestrator | 2025-05-06 00:57:22 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:25.464041 | orchestrator | 2025-05-06 00:57:22 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:25.464070 | orchestrator | 2025-05-06 00:57:25 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:25.467119 | orchestrator | 2025-05-06 00:57:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:25.468265 | orchestrator | 2025-05-06 00:57:25 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:25.469483 | orchestrator | 2025-05-06 00:57:25 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:28.518308 | orchestrator | 2025-05-06 00:57:25 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:28.518463 | orchestrator | 2025-05-06 00:57:28 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:31.563113 | orchestrator | 2025-05-06 00:57:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:31.563233 | orchestrator | 2025-05-06 00:57:28 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:31.563254 | orchestrator | 2025-05-06 00:57:28 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:31.563271 | orchestrator | 2025-05-06 00:57:28 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:31.563306 | orchestrator | 2025-05-06 00:57:31 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:31.563881 | orchestrator | 2025-05-06 00:57:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:31.563994 | orchestrator | 2025-05-06 00:57:31 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:31.564021 | orchestrator | 2025-05-06 00:57:31 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:34.601807 | orchestrator | 2025-05-06 00:57:31 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:34.601968 | orchestrator | 2025-05-06 00:57:34 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:34.604414 | orchestrator | 2025-05-06 00:57:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:34.604458 | orchestrator | 2025-05-06 00:57:34 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:34.604912 | orchestrator | 2025-05-06 00:57:34 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:37.630314 | orchestrator | 2025-05-06 00:57:34 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:37.630447 | orchestrator | 2025-05-06 00:57:37 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:37.631678 | orchestrator | 2025-05-06 00:57:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:37.632925 | orchestrator | 2025-05-06 00:57:37 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:37.633931 | orchestrator | 2025-05-06 00:57:37 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:40.670892 | orchestrator | 2025-05-06 00:57:37 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:40.671015 | orchestrator | 2025-05-06 00:57:40 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:40.672038 | orchestrator | 2025-05-06 00:57:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:40.672974 | orchestrator | 2025-05-06 00:57:40 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:40.674315 | orchestrator | 2025-05-06 00:57:40 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:43.720919 | orchestrator | 2025-05-06 00:57:40 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:43.721060 | orchestrator | 2025-05-06 00:57:43 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:43.722319 | orchestrator | 2025-05-06 00:57:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:43.723348 | orchestrator | 2025-05-06 00:57:43 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:43.723789 | orchestrator | 2025-05-06 00:57:43 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:43.724041 | orchestrator | 2025-05-06 00:57:43 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:46.780117 | orchestrator | 2025-05-06 00:57:46 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:46.781562 | orchestrator | 2025-05-06 00:57:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:46.782920 | orchestrator | 2025-05-06 00:57:46 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:46.784152 | orchestrator | 2025-05-06 00:57:46 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:46.784554 | orchestrator | 2025-05-06 00:57:46 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:49.834347 | orchestrator | 2025-05-06 00:57:49 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:49.842121 | orchestrator | 2025-05-06 00:57:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:49.843357 | orchestrator | 2025-05-06 00:57:49 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:49.844448 | orchestrator | 2025-05-06 00:57:49 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:52.897509 | orchestrator | 2025-05-06 00:57:49 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:52.897814 | orchestrator | 2025-05-06 00:57:52 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:52.900217 | orchestrator | 2025-05-06 00:57:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:52.900286 | orchestrator | 2025-05-06 00:57:52 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:52.900816 | orchestrator | 2025-05-06 00:57:52 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:55.945027 | orchestrator | 2025-05-06 00:57:52 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:55.945342 | orchestrator | 2025-05-06 00:57:55 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:55.946199 | orchestrator | 2025-05-06 00:57:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:55.946247 | orchestrator | 2025-05-06 00:57:55 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:55.952150 | orchestrator | 2025-05-06 00:57:55 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:57:59.018960 | orchestrator | 2025-05-06 00:57:55 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:57:59.019097 | orchestrator | 2025-05-06 00:57:59 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:57:59.019952 | orchestrator | 2025-05-06 00:57:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:57:59.020541 | orchestrator | 2025-05-06 00:57:59 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:57:59.020666 | orchestrator | 2025-05-06 00:57:59 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:02.063279 | orchestrator | 2025-05-06 00:57:59 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:02.063429 | orchestrator | 2025-05-06 00:58:02 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:02.066403 | orchestrator | 2025-05-06 00:58:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:02.070705 | orchestrator | 2025-05-06 00:58:02 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:02.072334 | orchestrator | 2025-05-06 00:58:02 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:05.147893 | orchestrator | 2025-05-06 00:58:02 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:05.148036 | orchestrator | 2025-05-06 00:58:05 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:05.150646 | orchestrator | 2025-05-06 00:58:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:05.153605 | orchestrator | 2025-05-06 00:58:05 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:05.155825 | orchestrator | 2025-05-06 00:58:05 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:08.208181 | orchestrator | 2025-05-06 00:58:05 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:08.208325 | orchestrator | 2025-05-06 00:58:08 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:08.210085 | orchestrator | 2025-05-06 00:58:08 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:08.211246 | orchestrator | 2025-05-06 00:58:08 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:08.212812 | orchestrator | 2025-05-06 00:58:08 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:08.213100 | orchestrator | 2025-05-06 00:58:08 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:11.276527 | orchestrator | 2025-05-06 00:58:11 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:11.278946 | orchestrator | 2025-05-06 00:58:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:11.280322 | orchestrator | 2025-05-06 00:58:11 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:11.281752 | orchestrator | 2025-05-06 00:58:11 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:14.323452 | orchestrator | 2025-05-06 00:58:11 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:14.323636 | orchestrator | 2025-05-06 00:58:14 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:14.324812 | orchestrator | 2025-05-06 00:58:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:14.326308 | orchestrator | 2025-05-06 00:58:14 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:14.327148 | orchestrator | 2025-05-06 00:58:14 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:14.327257 | orchestrator | 2025-05-06 00:58:14 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:17.385283 | orchestrator | 2025-05-06 00:58:17 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:17.387248 | orchestrator | 2025-05-06 00:58:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:17.389838 | orchestrator | 2025-05-06 00:58:17 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:17.393165 | orchestrator | 2025-05-06 00:58:17 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:20.437879 | orchestrator | 2025-05-06 00:58:17 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:20.438102 | orchestrator | 2025-05-06 00:58:20 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:20.439863 | orchestrator | 2025-05-06 00:58:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:20.442850 | orchestrator | 2025-05-06 00:58:20 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:20.446911 | orchestrator | 2025-05-06 00:58:20 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:20.447465 | orchestrator | 2025-05-06 00:58:20 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:23.497602 | orchestrator | 2025-05-06 00:58:23 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:23.499689 | orchestrator | 2025-05-06 00:58:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:23.502736 | orchestrator | 2025-05-06 00:58:23 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:23.505248 | orchestrator | 2025-05-06 00:58:23 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:26.568355 | orchestrator | 2025-05-06 00:58:23 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:26.568503 | orchestrator | 2025-05-06 00:58:26 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:26.569490 | orchestrator | 2025-05-06 00:58:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:26.573059 | orchestrator | 2025-05-06 00:58:26 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:26.574592 | orchestrator | 2025-05-06 00:58:26 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:29.618131 | orchestrator | 2025-05-06 00:58:26 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:29.618278 | orchestrator | 2025-05-06 00:58:29 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:29.620482 | orchestrator | 2025-05-06 00:58:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:29.621849 | orchestrator | 2025-05-06 00:58:29 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:29.625756 | orchestrator | 2025-05-06 00:58:29 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:32.678303 | orchestrator | 2025-05-06 00:58:29 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:32.678452 | orchestrator | 2025-05-06 00:58:32 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:32.679983 | orchestrator | 2025-05-06 00:58:32 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:32.681685 | orchestrator | 2025-05-06 00:58:32 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:32.682868 | orchestrator | 2025-05-06 00:58:32 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:35.730772 | orchestrator | 2025-05-06 00:58:32 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:35.730948 | orchestrator | 2025-05-06 00:58:35 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:35.732461 | orchestrator | 2025-05-06 00:58:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:35.734861 | orchestrator | 2025-05-06 00:58:35 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:35.737578 | orchestrator | 2025-05-06 00:58:35 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:38.783650 | orchestrator | 2025-05-06 00:58:35 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:38.783802 | orchestrator | 2025-05-06 00:58:38 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:38.785682 | orchestrator | 2025-05-06 00:58:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:38.788257 | orchestrator | 2025-05-06 00:58:38 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:38.790246 | orchestrator | 2025-05-06 00:58:38 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:41.838102 | orchestrator | 2025-05-06 00:58:38 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:41.838252 | orchestrator | 2025-05-06 00:58:41 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:41.838958 | orchestrator | 2025-05-06 00:58:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:41.840283 | orchestrator | 2025-05-06 00:58:41 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:41.841891 | orchestrator | 2025-05-06 00:58:41 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:44.889309 | orchestrator | 2025-05-06 00:58:41 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:44.889459 | orchestrator | 2025-05-06 00:58:44 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:44.890671 | orchestrator | 2025-05-06 00:58:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:44.892112 | orchestrator | 2025-05-06 00:58:44 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:44.893553 | orchestrator | 2025-05-06 00:58:44 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:47.949144 | orchestrator | 2025-05-06 00:58:44 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:47.949260 | orchestrator | 2025-05-06 00:58:47 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:47.950181 | orchestrator | 2025-05-06 00:58:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:47.951439 | orchestrator | 2025-05-06 00:58:47 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:47.952675 | orchestrator | 2025-05-06 00:58:47 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:51.002853 | orchestrator | 2025-05-06 00:58:47 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:51.003020 | orchestrator | 2025-05-06 00:58:51 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:51.004600 | orchestrator | 2025-05-06 00:58:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:51.007191 | orchestrator | 2025-05-06 00:58:51 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED
2025-05-06 00:58:51.009004 | orchestrator | 2025-05-06 00:58:51 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:58:51.009249 | orchestrator | 2025-05-06 00:58:51 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:58:54.060190 | orchestrator | 2025-05-06 00:58:54 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED
2025-05-06 00:58:54.062187 | orchestrator | 2025-05-06 00:58:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:58:54.063995 | orchestrator | 2025-05-06 00:58:54 | INFO  | Task
687b137d-2813-4873-b155-ab97472670a2 is in state STARTED 2025-05-06 00:58:54.065716 | orchestrator | 2025-05-06 00:58:54 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED 2025-05-06 00:58:54.065821 | orchestrator | 2025-05-06 00:58:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:58:57.113862 | orchestrator | 2025-05-06 00:58:57 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED 2025-05-06 00:58:57.114968 | orchestrator | 2025-05-06 00:58:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:58:57.116790 | orchestrator | 2025-05-06 00:58:57 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED 2025-05-06 00:58:57.118589 | orchestrator | 2025-05-06 00:58:57 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED 2025-05-06 00:59:00.171206 | orchestrator | 2025-05-06 00:58:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:59:00.171355 | orchestrator | 2025-05-06 00:59:00 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED 2025-05-06 00:59:00.174730 | orchestrator | 2025-05-06 00:59:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:59:00.174857 | orchestrator | 2025-05-06 00:59:00 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state STARTED 2025-05-06 00:59:00.177077 | orchestrator | 2025-05-06 00:59:00 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED 2025-05-06 00:59:03.221906 | orchestrator | 2025-05-06 00:59:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:59:03.222117 | orchestrator | 2025-05-06 00:59:03 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED 2025-05-06 00:59:03.223084 | orchestrator | 2025-05-06 00:59:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:59:03.224940 | orchestrator | 2025-05-06 00:59:03 | INFO  | Task 
687b137d-2813-4873-b155-ab97472670a2 is in state STARTED 2025-05-06 00:59:03.227278 | orchestrator | 2025-05-06 00:59:03 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED 2025-05-06 00:59:06.272135 | orchestrator | 2025-05-06 00:59:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:59:06.272278 | orchestrator | 2025-05-06 00:59:06 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED 2025-05-06 00:59:06.273294 | orchestrator | 2025-05-06 00:59:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:59:06.274221 | orchestrator | 2025-05-06 00:59:06 | INFO  | Task 687b137d-2813-4873-b155-ab97472670a2 is in state SUCCESS 2025-05-06 00:59:06.274429 | orchestrator | 2025-05-06 00:59:06.276269 | orchestrator | 2025-05-06 00:59:06.276306 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 00:59:06.276321 | orchestrator | 2025-05-06 00:59:06.276356 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 00:59:06.276371 | orchestrator | Tuesday 06 May 2025 00:57:23 +0000 (0:00:00.301) 0:00:00.301 *********** 2025-05-06 00:59:06.276385 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:06.276425 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:06.276440 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:59:06.276454 | orchestrator | 2025-05-06 00:59:06.276469 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 00:59:06.276482 | orchestrator | Tuesday 06 May 2025 00:57:24 +0000 (0:00:00.414) 0:00:00.715 *********** 2025-05-06 00:59:06.276496 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-06 00:59:06.276537 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-06 00:59:06.276552 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-06 
00:59:06.276566 | orchestrator | 2025-05-06 00:59:06.276579 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-06 00:59:06.276594 | orchestrator | 2025-05-06 00:59:06.276607 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-06 00:59:06.276621 | orchestrator | Tuesday 06 May 2025 00:57:24 +0000 (0:00:00.414) 0:00:01.130 *********** 2025-05-06 00:59:06.276635 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:59:06.276650 | orchestrator | 2025-05-06 00:59:06.277039 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-06 00:59:06.277093 | orchestrator | Tuesday 06 May 2025 00:57:25 +0000 (0:00:00.728) 0:00:01.859 *********** 2025-05-06 00:59:06.277124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:59:06.277175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:59:06.277225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:59:06.277253 | orchestrator | 2025-05-06 00:59:06.277277 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-06 00:59:06.277311 | orchestrator | Tuesday 06 May 2025 00:57:26 +0000 (0:00:01.468) 0:00:03.327 *********** 2025-05-06 00:59:06.277327 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:06.277341 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:06.277356 | orchestrator | ok: 
[testbed-node-2] 2025-05-06 00:59:06.277369 | orchestrator | 2025-05-06 00:59:06.277384 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-06 00:59:06.277398 | orchestrator | Tuesday 06 May 2025 00:57:27 +0000 (0:00:00.256) 0:00:03.584 *********** 2025-05-06 00:59:06.277422 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-06 00:59:06.277437 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-06 00:59:06.277451 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-06 00:59:06.277465 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-06 00:59:06.277478 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-06 00:59:06.277493 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-06 00:59:06.277544 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-06 00:59:06.277559 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-06 00:59:06.277572 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-06 00:59:06.277586 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-06 00:59:06.277600 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-06 00:59:06.277615 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-06 00:59:06.277632 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-06 00:59:06.277648 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-06 
00:59:06.277665 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-06 00:59:06.277681 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-06 00:59:06.277697 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-06 00:59:06.277713 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-06 00:59:06.277739 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-06 00:59:06.277757 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-06 00:59:06.277773 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-06 00:59:06.277789 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-06 00:59:06.277814 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-06 00:59:06.277830 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-06 00:59:06.277847 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-06 00:59:06.277863 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-06 00:59:06.277880 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 
'enabled': True}) 2025-05-06 00:59:06.277904 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-06 00:59:06.277919 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-06 00:59:06.277934 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-06 00:59:06.277950 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-06 00:59:06.277967 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-06 00:59:06.277984 | orchestrator | 2025-05-06 00:59:06.277998 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-06 00:59:06.278012 | orchestrator | Tuesday 06 May 2025 00:57:28 +0000 (0:00:00.982) 0:00:04.567 *********** 2025-05-06 00:59:06.278088 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:06.278103 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:06.278117 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:59:06.278130 | orchestrator | 2025-05-06 00:59:06.278144 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-06 00:59:06.278157 | orchestrator | Tuesday 06 May 2025 00:57:28 +0000 (0:00:00.418) 0:00:04.985 *********** 2025-05-06 00:59:06.278171 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.278186 | orchestrator | 2025-05-06 00:59:06.278208 | orchestrator | TASK [horizon : Update custom policy file name] 
******************************** 2025-05-06 00:59:06.278222 | orchestrator | Tuesday 06 May 2025 00:57:28 +0000 (0:00:00.123) 0:00:05.109 *********** 2025-05-06 00:59:06.278236 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.278250 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.278263 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.278277 | orchestrator | 2025-05-06 00:59:06.278290 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-06 00:59:06.278304 | orchestrator | Tuesday 06 May 2025 00:57:29 +0000 (0:00:00.478) 0:00:05.588 *********** 2025-05-06 00:59:06.278318 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:06.278331 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:06.278352 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:59:06.278365 | orchestrator | 2025-05-06 00:59:06.278379 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-06 00:59:06.278393 | orchestrator | Tuesday 06 May 2025 00:57:29 +0000 (0:00:00.296) 0:00:05.885 *********** 2025-05-06 00:59:06.278408 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.278422 | orchestrator | 2025-05-06 00:59:06.278436 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-06 00:59:06.278449 | orchestrator | Tuesday 06 May 2025 00:57:29 +0000 (0:00:00.340) 0:00:06.226 *********** 2025-05-06 00:59:06.278463 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.278477 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.278491 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.278533 | orchestrator | 2025-05-06 00:59:06.278548 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-06 00:59:06.278562 | orchestrator | Tuesday 06 May 2025 00:57:30 +0000 (0:00:00.600) 0:00:06.826 *********** 2025-05-06 
00:59:06.278575 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:06.278590 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:06.278603 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:59:06.278617 | orchestrator | 2025-05-06 00:59:06.278631 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-06 00:59:06.278644 | orchestrator | Tuesday 06 May 2025 00:57:30 +0000 (0:00:00.466) 0:00:07.293 *********** 2025-05-06 00:59:06.278666 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.278680 | orchestrator | 2025-05-06 00:59:06.278694 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-06 00:59:06.278708 | orchestrator | Tuesday 06 May 2025 00:57:30 +0000 (0:00:00.117) 0:00:07.410 *********** 2025-05-06 00:59:06.278722 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.278736 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.278750 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.278764 | orchestrator | 2025-05-06 00:59:06.278778 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-06 00:59:06.278792 | orchestrator | Tuesday 06 May 2025 00:57:31 +0000 (0:00:00.569) 0:00:07.980 *********** 2025-05-06 00:59:06.278805 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:06.278819 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:06.278833 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:59:06.278847 | orchestrator | 2025-05-06 00:59:06.278861 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-06 00:59:06.278875 | orchestrator | Tuesday 06 May 2025 00:57:32 +0000 (0:00:00.597) 0:00:08.578 *********** 2025-05-06 00:59:06.278889 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.278903 | orchestrator | 2025-05-06 00:59:06.278917 | orchestrator | TASK [horizon : Update custom 
policy file name] ******************************** 2025-05-06 00:59:06.278931 | orchestrator | Tuesday 06 May 2025 00:57:32 +0000 (0:00:00.124) 0:00:08.702 *********** 2025-05-06 00:59:06.278945 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.278959 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.278973 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.278987 | orchestrator | 2025-05-06 00:59:06.279001 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-06 00:59:06.279015 | orchestrator | Tuesday 06 May 2025 00:57:32 +0000 (0:00:00.537) 0:00:09.240 *********** 2025-05-06 00:59:06.279029 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:06.279043 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:06.279057 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:59:06.279070 | orchestrator | 2025-05-06 00:59:06.279084 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-06 00:59:06.279099 | orchestrator | Tuesday 06 May 2025 00:57:33 +0000 (0:00:00.332) 0:00:09.573 *********** 2025-05-06 00:59:06.279113 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.279127 | orchestrator | 2025-05-06 00:59:06.279141 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-06 00:59:06.279160 | orchestrator | Tuesday 06 May 2025 00:57:33 +0000 (0:00:00.263) 0:00:09.836 *********** 2025-05-06 00:59:06.279174 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.279188 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.279202 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.279216 | orchestrator | 2025-05-06 00:59:06.279230 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-06 00:59:06.279244 | orchestrator | Tuesday 06 May 2025 00:57:33 +0000 (0:00:00.602) 0:00:10.439 
*********** 2025-05-06 00:59:06.279257 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:06.279271 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:06.279285 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:59:06.279299 | orchestrator | 2025-05-06 00:59:06.279313 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-06 00:59:06.279326 | orchestrator | Tuesday 06 May 2025 00:57:34 +0000 (0:00:00.712) 0:00:11.151 *********** 2025-05-06 00:59:06.279340 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.279354 | orchestrator | 2025-05-06 00:59:06.279368 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-06 00:59:06.279382 | orchestrator | Tuesday 06 May 2025 00:57:34 +0000 (0:00:00.167) 0:00:11.319 *********** 2025-05-06 00:59:06.279396 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.279409 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.279423 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.279443 | orchestrator | 2025-05-06 00:59:06.279457 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-06 00:59:06.279471 | orchestrator | Tuesday 06 May 2025 00:57:35 +0000 (0:00:00.471) 0:00:11.790 *********** 2025-05-06 00:59:06.279491 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:06.279551 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:06.279566 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:59:06.279580 | orchestrator | 2025-05-06 00:59:06.279594 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-06 00:59:06.279608 | orchestrator | Tuesday 06 May 2025 00:57:35 +0000 (0:00:00.435) 0:00:12.226 *********** 2025-05-06 00:59:06.279622 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.279635 | orchestrator | 2025-05-06 00:59:06.279649 | orchestrator | TASK 
[horizon : Update custom policy file name] ******************************** 2025-05-06 00:59:06.279662 | orchestrator | Tuesday 06 May 2025 00:57:35 +0000 (0:00:00.159) 0:00:12.386 *********** 2025-05-06 00:59:06.279676 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.279690 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.279704 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.279717 | orchestrator | 2025-05-06 00:59:06.279731 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-06 00:59:06.279745 | orchestrator | Tuesday 06 May 2025 00:57:36 +0000 (0:00:00.292) 0:00:12.679 *********** 2025-05-06 00:59:06.279758 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:06.279772 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:06.279786 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:59:06.279799 | orchestrator | 2025-05-06 00:59:06.279813 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-06 00:59:06.279827 | orchestrator | Tuesday 06 May 2025 00:57:36 +0000 (0:00:00.323) 0:00:13.003 *********** 2025-05-06 00:59:06.279840 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.279854 | orchestrator | 2025-05-06 00:59:06.279867 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-06 00:59:06.279881 | orchestrator | Tuesday 06 May 2025 00:57:36 +0000 (0:00:00.114) 0:00:13.117 *********** 2025-05-06 00:59:06.279895 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.279909 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.279922 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.279936 | orchestrator | 2025-05-06 00:59:06.279949 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-06 00:59:06.279963 | orchestrator | Tuesday 06 May 2025 00:57:36 +0000 
(0:00:00.310) 0:00:13.428 ***********
2025-05-06 00:59:06.279977 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:59:06.279991 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:59:06.280004 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:59:06.280018 | orchestrator |
2025-05-06 00:59:06.280032 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-06 00:59:06.280046 | orchestrator | Tuesday 06 May 2025 00:57:37 +0000 (0:00:00.235) 0:00:13.663 ***********
2025-05-06 00:59:06.280060 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:06.280074 | orchestrator |
2025-05-06 00:59:06.280087 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-06 00:59:06.280101 | orchestrator | Tuesday 06 May 2025 00:57:37 +0000 (0:00:00.081) 0:00:13.744 ***********
2025-05-06 00:59:06.280115 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:06.280139 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:59:06.280155 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:59:06.280169 | orchestrator |
2025-05-06 00:59:06.280183 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-06 00:59:06.280197 | orchestrator | Tuesday 06 May 2025 00:57:37 +0000 (0:00:00.308) 0:00:14.053 ***********
2025-05-06 00:59:06.280210 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:59:06.280224 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:59:06.280237 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:59:06.280251 | orchestrator |
2025-05-06 00:59:06.280271 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-06 00:59:06.280285 | orchestrator | Tuesday 06 May 2025 00:57:37 +0000 (0:00:00.324) 0:00:14.378 ***********
2025-05-06 00:59:06.280299 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:06.280312 | orchestrator |
2025-05-06 00:59:06.280326 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-06 00:59:06.280340 | orchestrator | Tuesday 06 May 2025 00:57:37 +0000 (0:00:00.095) 0:00:14.473 ***********
2025-05-06 00:59:06.280353 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:06.280367 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:59:06.280381 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:59:06.280395 | orchestrator |
2025-05-06 00:59:06.280408 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-06 00:59:06.280426 | orchestrator | Tuesday 06 May 2025 00:57:38 +0000 (0:00:00.353) 0:00:14.827 ***********
2025-05-06 00:59:06.280440 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:59:06.280454 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:59:06.280468 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:59:06.280482 | orchestrator |
2025-05-06 00:59:06.280496 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-06 00:59:06.280528 | orchestrator | Tuesday 06 May 2025 00:57:38 +0000 (0:00:00.524) 0:00:15.352 ***********
2025-05-06 00:59:06.280542 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:06.280556 | orchestrator |
2025-05-06 00:59:06.280570 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-06 00:59:06.280583 | orchestrator | Tuesday 06 May 2025 00:57:38 +0000 (0:00:00.101) 0:00:15.454 ***********
2025-05-06 00:59:06.280596 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:06.280610 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:59:06.280624 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:59:06.280638 | orchestrator |
2025-05-06 00:59:06.280651 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-05-06 00:59:06.280665 | orchestrator | Tuesday 06 May 2025 00:57:39 +0000 (0:00:00.723) 0:00:16.177 ***********
2025-05-06 00:59:06.280679 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:59:06.280692 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:59:06.280706 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:59:06.280720 | orchestrator |
2025-05-06 00:59:06.280733 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-05-06 00:59:06.280747 | orchestrator | Tuesday 06 May 2025 00:57:42 +0000 (0:00:02.975) 0:00:19.153 ***********
2025-05-06 00:59:06.280761 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-06 00:59:06.280781 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-06 00:59:06.280796 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-05-06 00:59:06.280810 | orchestrator |
2025-05-06 00:59:06.280824 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-05-06 00:59:06.280838 | orchestrator | Tuesday 06 May 2025 00:57:45 +0000 (0:00:02.679) 0:00:21.832 ***********
2025-05-06 00:59:06.280851 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-06 00:59:06.280866 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-06 00:59:06.280880 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-05-06 00:59:06.280894 | orchestrator |
2025-05-06 00:59:06.280908 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-05-06 00:59:06.280921 | orchestrator | Tuesday 06 May 2025 00:57:47 +0000 (0:00:02.430) 0:00:24.263 ***********
2025-05-06 00:59:06.280935 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-06 00:59:06.280949 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-06 00:59:06.280976 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-05-06 00:59:06.280990 | orchestrator |
2025-05-06 00:59:06.281003 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-05-06 00:59:06.281023 | orchestrator | Tuesday 06 May 2025 00:57:49 +0000 (0:00:02.036) 0:00:26.299 ***********
2025-05-06 00:59:06.281038 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:06.281052 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:59:06.281066 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:59:06.281079 | orchestrator |
2025-05-06 00:59:06.281093 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-05-06 00:59:06.281107 | orchestrator | Tuesday 06 May 2025 00:57:50 +0000 (0:00:00.279) 0:00:26.578 ***********
2025-05-06 00:59:06.281121 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:06.281134 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:59:06.281148 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:59:06.281162 | orchestrator |
2025-05-06 00:59:06.281176 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-06 00:59:06.281189 | orchestrator | Tuesday 06 May 2025 00:57:50 +0000 (0:00:00.531) 0:00:27.109 ***********
2025-05-06 00:59:06.281204 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:59:06.281218 | orchestrator |
2025-05-06 00:59:06.281231 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-05-06 00:59:06.281245 | orchestrator |
Tuesday 06 May 2025 00:57:51 +0000 (0:00:00.766) 0:00:27.876 *********** 2025-05-06 00:59:06.281267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:59:06.281285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:59:06.281315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:59:06.281337 | orchestrator | 2025-05-06 00:59:06.281352 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-06 00:59:06.281371 | orchestrator | Tuesday 06 May 2025 00:57:52 +0000 (0:00:01.517) 0:00:29.393 *********** 2025-05-06 00:59:06.281386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-06 00:59:06.281401 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.281425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-06 00:59:06.281448 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.281463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 
'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-06 00:59:06.281477 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.281491 | orchestrator | 2025-05-06 00:59:06.281549 | orchestrator | TASK [service-cert-copy : 
horizon | Copying over backend internal TLS key] ***** 2025-05-06 00:59:06.281565 | orchestrator | Tuesday 06 May 2025 00:57:53 +0000 (0:00:00.924) 0:00:30.318 *********** 2025-05-06 00:59:06.281590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-06 00:59:06.281613 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.281628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-06 00:59:06.281643 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.281666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-06 00:59:06.281688 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.281703 | orchestrator | 2025-05-06 00:59:06.281717 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-06 00:59:06.281731 | orchestrator | Tuesday 06 May 2025 00:57:54 +0000 (0:00:01.219) 0:00:31.538 *********** 2025-05-06 00:59:06.281751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:59:06.281775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-06 00:59:06.281799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-05-06 00:59:06.281821 | orchestrator | 2025-05-06 00:59:06.281835 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-06 00:59:06.281847 | orchestrator | Tuesday 06 May 2025 00:58:00 +0000 (0:00:05.105) 0:00:36.644 *********** 2025-05-06 00:59:06.281860 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:06.281872 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:06.281884 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:06.281897 | orchestrator | 2025-05-06 00:59:06.281909 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-06 00:59:06.281922 | orchestrator | Tuesday 06 May 2025 00:58:00 +0000 (0:00:00.389) 0:00:37.033 *********** 2025-05-06 00:59:06.281934 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 00:59:06.281947 | orchestrator | 2025-05-06 00:59:06.281959 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-06 00:59:06.281972 | orchestrator | Tuesday 06 May 2025 00:58:01 +0000 (0:00:00.537) 0:00:37.571 *********** 2025-05-06 00:59:06.281989 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:59:06.282002 | orchestrator | 2025-05-06 00:59:06.282037 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-06 00:59:06.282053 | orchestrator | Tuesday 06 May 2025 00:58:03 +0000 (0:00:02.518) 0:00:40.090 *********** 2025-05-06 00:59:06.282065 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:59:06.282078 | orchestrator | 2025-05-06 00:59:06.282090 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-06 00:59:06.282102 | orchestrator | Tuesday 06 May 2025 00:58:05 +0000 (0:00:02.324) 0:00:42.414 *********** 2025-05-06 00:59:06.282114 | 
orchestrator | changed: [testbed-node-0] 2025-05-06 00:59:06.282126 | orchestrator | 2025-05-06 00:59:06.282139 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-06 00:59:06.282151 | orchestrator | Tuesday 06 May 2025 00:58:19 +0000 (0:00:14.078) 0:00:56.493 *********** 2025-05-06 00:59:06.282163 | orchestrator | 2025-05-06 00:59:06.282176 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-06 00:59:06.282188 | orchestrator | Tuesday 06 May 2025 00:58:20 +0000 (0:00:00.078) 0:00:56.572 *********** 2025-05-06 00:59:06.282200 | orchestrator | 2025-05-06 00:59:06.282212 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-06 00:59:06.282225 | orchestrator | Tuesday 06 May 2025 00:58:20 +0000 (0:00:00.241) 0:00:56.813 *********** 2025-05-06 00:59:06.282237 | orchestrator | 2025-05-06 00:59:06.282249 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-06 00:59:06.282262 | orchestrator | Tuesday 06 May 2025 00:58:20 +0000 (0:00:00.066) 0:00:56.880 *********** 2025-05-06 00:59:06.282274 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:59:06.282287 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:59:06.282299 | orchestrator | changed: [testbed-node-2] 2025-05-06 00:59:06.282311 | orchestrator | 2025-05-06 00:59:06.282323 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 00:59:06.282336 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-06 00:59:06.282355 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-06 00:59:06.282368 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-06 00:59:06.282380 | 
orchestrator | 2025-05-06 00:59:06.282392 | orchestrator | 2025-05-06 00:59:06.282405 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 00:59:06.282417 | orchestrator | Tuesday 06 May 2025 00:59:03 +0000 (0:00:43.424) 0:01:40.305 *********** 2025-05-06 00:59:06.282430 | orchestrator | =============================================================================== 2025-05-06 00:59:06.282442 | orchestrator | horizon : Restart horizon container ------------------------------------ 43.42s 2025-05-06 00:59:06.282454 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.08s 2025-05-06 00:59:06.282467 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.11s 2025-05-06 00:59:06.282479 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.98s 2025-05-06 00:59:06.282491 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.68s 2025-05-06 00:59:06.282560 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.52s 2025-05-06 00:59:06.282573 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.43s 2025-05-06 00:59:06.282586 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.32s 2025-05-06 00:59:06.282598 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.04s 2025-05-06 00:59:06.282611 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.52s 2025-05-06 00:59:06.282623 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.47s 2025-05-06 00:59:06.282635 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.22s 2025-05-06 00:59:06.282648 | orchestrator | horizon : include_tasks 
------------------------------------------------- 0.98s 2025-05-06 00:59:06.282665 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.92s 2025-05-06 00:59:09.324345 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2025-05-06 00:59:09.324553 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2025-05-06 00:59:09.324580 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.72s 2025-05-06 00:59:09.324595 | orchestrator | horizon : Update policy file name --------------------------------------- 0.71s 2025-05-06 00:59:09.324609 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.60s 2025-05-06 00:59:09.324623 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.60s 2025-05-06 00:59:09.324639 | orchestrator | 2025-05-06 00:59:06 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED 2025-05-06 00:59:09.324653 | orchestrator | 2025-05-06 00:59:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:59:09.324686 | orchestrator | 2025-05-06 00:59:09 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED 2025-05-06 00:59:09.325900 | orchestrator | 2025-05-06 00:59:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:59:09.327784 | orchestrator | 2025-05-06 00:59:09 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED 2025-05-06 00:59:12.387949 | orchestrator | 2025-05-06 00:59:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:59:12.388089 | orchestrator | 2025-05-06 00:59:12 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED 2025-05-06 00:59:12.388897 | orchestrator | 2025-05-06 00:59:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:59:12.389778 | 
orchestrator | 2025-05-06 00:59:12 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED 2025-05-06 00:59:15.436395 | orchestrator | 2025-05-06 00:59:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:59:15.436610 | orchestrator | 2025-05-06 00:59:15 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state STARTED 2025-05-06 00:59:15.438868 | orchestrator | 2025-05-06 00:59:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 00:59:15.440544 | orchestrator | 2025-05-06 00:59:15 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED 2025-05-06 00:59:18.505386 | orchestrator | 2025-05-06 00:59:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 00:59:18.505636 | orchestrator | 2025-05-06 00:59:18.506197 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-06 00:59:18.506221 | orchestrator | 2025-05-06 00:59:18.506236 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-06 00:59:18.506326 | orchestrator | 2025-05-06 00:59:18.506344 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-06 00:59:18.506359 | orchestrator | Tuesday 06 May 2025 00:57:08 +0000 (0:00:01.091) 0:00:01.091 *********** 2025-05-06 00:59:18.506375 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:59:18.506391 | orchestrator | 2025-05-06 00:59:18.506406 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-06 00:59:18.506420 | orchestrator | Tuesday 06 May 2025 00:57:08 +0000 (0:00:00.472) 0:00:01.563 *********** 2025-05-06 00:59:18.506435 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-06 00:59:18.506450 | orchestrator | changed: [testbed-node-3] => 
(item=testbed-node-1) 2025-05-06 00:59:18.506465 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-06 00:59:18.506479 | orchestrator | 2025-05-06 00:59:18.506528 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-06 00:59:18.506542 | orchestrator | Tuesday 06 May 2025 00:57:09 +0000 (0:00:00.746) 0:00:02.309 *********** 2025-05-06 00:59:18.506556 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 00:59:18.506570 | orchestrator | 2025-05-06 00:59:18.506584 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-06 00:59:18.506598 | orchestrator | Tuesday 06 May 2025 00:57:10 +0000 (0:00:00.703) 0:00:03.013 *********** 2025-05-06 00:59:18.506611 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.506626 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.506640 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.506654 | orchestrator | 2025-05-06 00:59:18.506668 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-06 00:59:18.506682 | orchestrator | Tuesday 06 May 2025 00:57:10 +0000 (0:00:00.663) 0:00:03.677 *********** 2025-05-06 00:59:18.506695 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.506709 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.506723 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.506737 | orchestrator | 2025-05-06 00:59:18.506750 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-06 00:59:18.506764 | orchestrator | Tuesday 06 May 2025 00:57:11 +0000 (0:00:00.276) 0:00:03.954 *********** 2025-05-06 00:59:18.506778 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.506792 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.506805 | orchestrator | ok: [testbed-node-5] 
2025-05-06 00:59:18.506819 | orchestrator | 2025-05-06 00:59:18.506833 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-06 00:59:18.506847 | orchestrator | Tuesday 06 May 2025 00:57:11 +0000 (0:00:00.737) 0:00:04.692 *********** 2025-05-06 00:59:18.506860 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.506893 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.506931 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.506946 | orchestrator | 2025-05-06 00:59:18.506962 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-06 00:59:18.506978 | orchestrator | Tuesday 06 May 2025 00:57:12 +0000 (0:00:00.300) 0:00:04.992 *********** 2025-05-06 00:59:18.506993 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.507009 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.507025 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.507041 | orchestrator | 2025-05-06 00:59:18.507056 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-06 00:59:18.507072 | orchestrator | Tuesday 06 May 2025 00:57:12 +0000 (0:00:00.285) 0:00:05.278 *********** 2025-05-06 00:59:18.507087 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.507102 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.507118 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.507133 | orchestrator | 2025-05-06 00:59:18.507149 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-06 00:59:18.507164 | orchestrator | Tuesday 06 May 2025 00:57:12 +0000 (0:00:00.297) 0:00:05.576 *********** 2025-05-06 00:59:18.507180 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.507198 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.507214 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.507228 | orchestrator | 
2025-05-06 00:59:18.507241 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-06 00:59:18.507255 | orchestrator | Tuesday 06 May 2025 00:57:13 +0000 (0:00:00.435) 0:00:06.011 *********** 2025-05-06 00:59:18.507269 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.507283 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.507296 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.507311 | orchestrator | 2025-05-06 00:59:18.507324 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-06 00:59:18.507338 | orchestrator | Tuesday 06 May 2025 00:57:13 +0000 (0:00:00.275) 0:00:06.286 *********** 2025-05-06 00:59:18.507357 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-06 00:59:18.507372 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-06 00:59:18.507386 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-06 00:59:18.507399 | orchestrator | 2025-05-06 00:59:18.507413 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-06 00:59:18.507427 | orchestrator | Tuesday 06 May 2025 00:57:13 +0000 (0:00:00.636) 0:00:06.923 *********** 2025-05-06 00:59:18.507440 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.507454 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.507467 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.507481 | orchestrator | 2025-05-06 00:59:18.507519 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-06 00:59:18.507534 | orchestrator | Tuesday 06 May 2025 00:57:14 +0000 (0:00:00.414) 0:00:07.338 *********** 2025-05-06 00:59:18.507590 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-06 
00:59:18.507607 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-06 00:59:18.507621 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-06 00:59:18.507635 | orchestrator | 2025-05-06 00:59:18.507649 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-06 00:59:18.507662 | orchestrator | Tuesday 06 May 2025 00:57:16 +0000 (0:00:02.250) 0:00:09.589 *********** 2025-05-06 00:59:18.507677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-06 00:59:18.507690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-06 00:59:18.507705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-06 00:59:18.507719 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.507741 | orchestrator | 2025-05-06 00:59:18.507755 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-06 00:59:18.507769 | orchestrator | Tuesday 06 May 2025 00:57:17 +0000 (0:00:00.420) 0:00:10.009 *********** 2025-05-06 00:59:18.507784 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-06 00:59:18.507800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-06 00:59:18.507815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 
'item'})  2025-05-06 00:59:18.507829 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.507843 | orchestrator | 2025-05-06 00:59:18.507857 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-06 00:59:18.507870 | orchestrator | Tuesday 06 May 2025 00:57:17 +0000 (0:00:00.753) 0:00:10.763 *********** 2025-05-06 00:59:18.507890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-06 00:59:18.507906 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-06 00:59:18.507920 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-06 00:59:18.507935 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.507948 | orchestrator | 2025-05-06 00:59:18.507962 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-06 00:59:18.507976 
| orchestrator | Tuesday 06 May 2025 00:57:18 +0000 (0:00:00.188) 0:00:10.952 *********** 2025-05-06 00:59:18.507992 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '6924cdc93e01', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-06 00:57:15.263546', 'end': '2025-05-06 00:57:15.294804', 'delta': '0:00:00.031258', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6924cdc93e01'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-06 00:59:18.508025 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '6081863ef374', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-06 00:57:15.796350', 'end': '2025-05-06 00:57:15.835526', 'delta': '0:00:00.039176', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6081863ef374'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-06 00:59:18.508047 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '9cca38efb257', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-06 00:57:16.324952', 'end': '2025-05-06 00:57:16.370493', 'delta': '0:00:00.045541', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q 
--filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9cca38efb257'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-06 00:59:18.508151 | orchestrator | 2025-05-06 00:59:18.508175 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-06 00:59:18.508263 | orchestrator | Tuesday 06 May 2025 00:57:18 +0000 (0:00:00.205) 0:00:11.157 *********** 2025-05-06 00:59:18.508282 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.508296 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.508310 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.508324 | orchestrator | 2025-05-06 00:59:18.508338 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-06 00:59:18.508352 | orchestrator | Tuesday 06 May 2025 00:57:18 +0000 (0:00:00.481) 0:00:11.639 *********** 2025-05-06 00:59:18.508366 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-06 00:59:18.508380 | orchestrator | 2025-05-06 00:59:18.508394 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-06 00:59:18.508408 | orchestrator | Tuesday 06 May 2025 00:57:20 +0000 (0:00:01.375) 0:00:13.014 *********** 2025-05-06 00:59:18.508422 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.508435 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.508450 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.508464 | orchestrator | 2025-05-06 00:59:18.508478 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-06 00:59:18.508542 | orchestrator | Tuesday 06 May 2025 00:57:20 +0000 (0:00:00.447) 
0:00:13.462 *********** 2025-05-06 00:59:18.508557 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.508571 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.508584 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.508598 | orchestrator | 2025-05-06 00:59:18.508612 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-06 00:59:18.508626 | orchestrator | Tuesday 06 May 2025 00:57:20 +0000 (0:00:00.401) 0:00:13.863 *********** 2025-05-06 00:59:18.508640 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.508654 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.508667 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.508681 | orchestrator | 2025-05-06 00:59:18.508695 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-06 00:59:18.508709 | orchestrator | Tuesday 06 May 2025 00:57:21 +0000 (0:00:00.276) 0:00:14.140 *********** 2025-05-06 00:59:18.508723 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.508737 | orchestrator | 2025-05-06 00:59:18.508750 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-06 00:59:18.508764 | orchestrator | Tuesday 06 May 2025 00:57:21 +0000 (0:00:00.121) 0:00:14.261 *********** 2025-05-06 00:59:18.508778 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.508792 | orchestrator | 2025-05-06 00:59:18.508812 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-06 00:59:18.508835 | orchestrator | Tuesday 06 May 2025 00:57:21 +0000 (0:00:00.223) 0:00:14.485 *********** 2025-05-06 00:59:18.508849 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.508863 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.508877 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.508891 | orchestrator | 2025-05-06 
00:59:18.508908 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-06 00:59:18.508923 | orchestrator | Tuesday 06 May 2025 00:57:22 +0000 (0:00:00.484) 0:00:14.969 *********** 2025-05-06 00:59:18.508939 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.508955 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.508970 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.508985 | orchestrator | 2025-05-06 00:59:18.509001 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-06 00:59:18.509016 | orchestrator | Tuesday 06 May 2025 00:57:22 +0000 (0:00:00.303) 0:00:15.273 *********** 2025-05-06 00:59:18.509032 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.509048 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.509062 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.509076 | orchestrator | 2025-05-06 00:59:18.509091 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-06 00:59:18.509105 | orchestrator | Tuesday 06 May 2025 00:57:22 +0000 (0:00:00.312) 0:00:15.586 *********** 2025-05-06 00:59:18.509120 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.509134 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.509156 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.509171 | orchestrator | 2025-05-06 00:59:18.509185 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-06 00:59:18.509199 | orchestrator | Tuesday 06 May 2025 00:57:22 +0000 (0:00:00.314) 0:00:15.900 *********** 2025-05-06 00:59:18.509213 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.509227 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.509241 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.509255 | orchestrator | 2025-05-06 
00:59:18.509268 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-06 00:59:18.509280 | orchestrator | Tuesday 06 May 2025 00:57:23 +0000 (0:00:00.500) 0:00:16.401 *********** 2025-05-06 00:59:18.509292 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.509305 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.509322 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.509335 | orchestrator | 2025-05-06 00:59:18.509347 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-06 00:59:18.509360 | orchestrator | Tuesday 06 May 2025 00:57:23 +0000 (0:00:00.307) 0:00:16.708 *********** 2025-05-06 00:59:18.509372 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.509384 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.509396 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.509409 | orchestrator | 2025-05-06 00:59:18.509421 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-06 00:59:18.509433 | orchestrator | Tuesday 06 May 2025 00:57:24 +0000 (0:00:00.358) 0:00:17.067 *********** 2025-05-06 00:59:18.509448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83550523--1175--5b11--b232--63a45b36e32a-osd--block--83550523--1175--5b11--b232--63a45b36e32a', 'dm-uuid-LVM-GgmBurLjrRojbuVdJgmdwztR3neYgf1c7Ki4DK6SlqESws0brjFgjWvn2dL4wKKq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': 
{'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2fbee355--69b3--5569--a73a--eae1d5356d34-osd--block--2fbee355--69b3--5569--a73a--eae1d5356d34', 'dm-uuid-LVM-jIAwNtMJkYPhxalyfQIKT0DJEOfCeYi271Yl41nyIgwU7qqsMM4cSNC8JeE5HLt6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a0f4265--dd5d--556c--ac35--a800ef93314e-osd--block--8a0f4265--dd5d--556c--ac35--a800ef93314e', 'dm-uuid-LVM-zuegJs53sNFcEk2Qr78Q7DBNbi7NmCWo8O9bST56x01qFU7kwxSq8ZPjRA11dqOE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0', 'scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part1', 'scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part14', 'scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part15', 'scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part16', 'scsi-SQEMU_QEMU_HARDDISK_b7536583-7396-4238-bfd9-176b53234dc0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.509664 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--108592b4--5156--5470--952e--be389a9738cf-osd--block--108592b4--5156--5470--952e--be389a9738cf', 'dm-uuid-LVM-xsK2Ofv2ainQ3J0edqln2NvhPmXViG7NeYxpNg2B8MvLMGiCEiECcQx5j0MrUj9q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--83550523--1175--5b11--b232--63a45b36e32a-osd--block--83550523--1175--5b11--b232--63a45b36e32a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MwZmhh-rBzg-zyIr-Vk69-Pm39-fPKX-xz875U', 'scsi-0QEMU_QEMU_HARDDISK_8c0721df-98b6-45a8-8372-f184b99eacbe', 'scsi-SQEMU_QEMU_HARDDISK_8c0721df-98b6-45a8-8372-f184b99eacbe'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.509698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--2fbee355--69b3--5569--a73a--eae1d5356d34-osd--block--2fbee355--69b3--5569--a73a--eae1d5356d34'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c0zpup-1mYc-QbMy-SPRk-kJl2-ai3v-oQDtTa', 'scsi-0QEMU_QEMU_HARDDISK_cc7f276d-c2ba-4b91-9f6b-a505ec6ab98a', 'scsi-SQEMU_QEMU_HARDDISK_cc7f276d-c2ba-4b91-9f6b-a505ec6ab98a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.509724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e976783-2213-433c-91fb-66c729e68827', 'scsi-SQEMU_QEMU_HARDDISK_7e976783-2213-433c-91fb-66c729e68827'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.509738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-06-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.509770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5100a9d2--ae69--5e7a--989d--a5d69986fee9-osd--block--5100a9d2--ae69--5e7a--989d--a5d69986fee9', 'dm-uuid-LVM-x3exsVVRoVE9qjt2tke4ynGkNCRUsEUSIQaEXSrD3ztcPcJlyaxi7VTzV2THqjR0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509783 | orchestrator | skipping: [testbed-node-3] 2025-05-06 00:59:18.509796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--376b0c1a--f7d0--50df--9bf6--f05e021d85c5-osd--block--376b0c1a--f7d0--50df--9bf6--f05e021d85c5', 'dm-uuid-LVM-lwatybLHyBWLUDcfTzEaXxgm7hWkw4BeyA07WfaRC32N9BsmxD4KHdMKCrqzZ0dn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.509976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8', 'scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_79d885cd-88d7-4c9f-ace5-7a5a5f31c1d8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18 | INFO  | Task 78d7e0c2-6c0d-4de8-b313-100674c6bb08 is in state SUCCESS 2025-05-06 00:59:18.510010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8a0f4265--dd5d--556c--ac35--a800ef93314e-osd--block--8a0f4265--dd5d--556c--ac35--a800ef93314e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cshrKI-P5p2-b0PR-qB7W-hF2D-fccW-9tfpY1', 'scsi-0QEMU_QEMU_HARDDISK_c3e2c64f-9688-4cad-bb81-b3a7d150bd8b', 'scsi-SQEMU_QEMU_HARDDISK_c3e2c64f-9688-4cad-bb81-b3a7d150bd8b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.510059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.510072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--108592b4--5156--5470--952e--be389a9738cf-osd--block--108592b4--5156--5470--952e--be389a9738cf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8vkhqM-Fm6b-yUju-i25w-b43v-w3ch-2kYXWZ', 'scsi-0QEMU_QEMU_HARDDISK_bc0c56a8-1377-4a36-857b-86c78b746055', 'scsi-SQEMU_QEMU_HARDDISK_bc0c56a8-1377-4a36-857b-86c78b746055'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.510085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.510099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eefa0fb1-6e32-4be6-9371-3c36667f9eb4', 'scsi-SQEMU_QEMU_HARDDISK_eefa0fb1-6e32-4be6-9371-3c36667f9eb4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.510116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:18.510136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-06-00-02-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.510150 | orchestrator | skipping: [testbed-node-4] 2025-05-06 00:59:18.510164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247', 'scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part1', 'scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part14', 'scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part15', 'scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part16', 'scsi-SQEMU_QEMU_HARDDISK_527d5616-4d3e-4454-846d-b66391bf5247-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.510186 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5100a9d2--ae69--5e7a--989d--a5d69986fee9-osd--block--5100a9d2--ae69--5e7a--989d--a5d69986fee9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6kocz-QmEq-jdH7-6rqs-amLw-Uefn-wjlZzF', 'scsi-0QEMU_QEMU_HARDDISK_9f4cae81-5600-43ad-ae81-4d2d3f64aa06', 'scsi-SQEMU_QEMU_HARDDISK_9f4cae81-5600-43ad-ae81-4d2d3f64aa06'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.510199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--376b0c1a--f7d0--50df--9bf6--f05e021d85c5-osd--block--376b0c1a--f7d0--50df--9bf6--f05e021d85c5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9vQSwD-XmWH-MgjW-mM9S-SKdR-L0Gp-u9GUq6', 'scsi-0QEMU_QEMU_HARDDISK_a5a4c6fa-807d-44c7-a556-c4522912d679', 'scsi-SQEMU_QEMU_HARDDISK_a5a4c6fa-807d-44c7-a556-c4522912d679'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.510219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2e4c6c8-e338-4410-96b4-d1d5dab5be16', 'scsi-SQEMU_QEMU_HARDDISK_f2e4c6c8-e338-4410-96b4-d1d5dab5be16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.510236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-06-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:18.510261 | orchestrator | skipping: [testbed-node-5] 2025-05-06 00:59:18.510274 | orchestrator | 2025-05-06 00:59:18.510287 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-06 00:59:18.510299 | orchestrator | Tuesday 06 May 2025 00:57:24 +0000 (0:00:00.617) 0:00:17.684 *********** 2025-05-06 00:59:18.510311 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-06 00:59:18.510324 | orchestrator | 2025-05-06 00:59:18.510336 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-06 00:59:18.510348 | orchestrator | Tuesday 06 May 2025 00:57:25 +0000 (0:00:01.169) 0:00:18.853 *********** 2025-05-06 00:59:18.510360 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.510373 | orchestrator | 2025-05-06 00:59:18.510385 | orchestrator | TASK [ceph-facts 
: set_fact rgw_hostname] ************************************** 2025-05-06 00:59:18.510397 | orchestrator | Tuesday 06 May 2025 00:57:26 +0000 (0:00:00.317) 0:00:19.170 *********** 2025-05-06 00:59:18.510409 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.510421 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.510434 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.510446 | orchestrator | 2025-05-06 00:59:18.510458 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-06 00:59:18.510470 | orchestrator | Tuesday 06 May 2025 00:57:26 +0000 (0:00:00.352) 0:00:19.523 *********** 2025-05-06 00:59:18.510482 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.510539 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.510552 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.510565 | orchestrator | 2025-05-06 00:59:18.510577 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-06 00:59:18.510587 | orchestrator | Tuesday 06 May 2025 00:57:27 +0000 (0:00:00.622) 0:00:20.146 *********** 2025-05-06 00:59:18.510597 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.510607 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.510617 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.510627 | orchestrator | 2025-05-06 00:59:18.510636 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-06 00:59:18.510646 | orchestrator | Tuesday 06 May 2025 00:57:27 +0000 (0:00:00.307) 0:00:20.454 *********** 2025-05-06 00:59:18.510656 | orchestrator | ok: [testbed-node-3] 2025-05-06 00:59:18.510666 | orchestrator | ok: [testbed-node-4] 2025-05-06 00:59:18.510676 | orchestrator | ok: [testbed-node-5] 2025-05-06 00:59:18.510686 | orchestrator | 2025-05-06 00:59:18.510696 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-06 
00:59:18.510706 | orchestrator | Tuesday 06 May 2025 00:57:28 +0000 (0:00:00.778) 0:00:21.232 ***********
2025-05-06 00:59:18.510715 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.510726 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.510736 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.510746 | orchestrator |
2025-05-06 00:59:18.510755 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-06 00:59:18.510770 | orchestrator | Tuesday 06 May 2025 00:57:28 +0000 (0:00:00.289) 0:00:21.522 ***********
2025-05-06 00:59:18.510780 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.510790 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.510800 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.510810 | orchestrator |
2025-05-06 00:59:18.510820 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-06 00:59:18.510830 | orchestrator | Tuesday 06 May 2025 00:57:29 +0000 (0:00:00.434) 0:00:21.957 ***********
2025-05-06 00:59:18.510846 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.510856 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.510866 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.510876 | orchestrator |
2025-05-06 00:59:18.510886 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-05-06 00:59:18.510896 | orchestrator | Tuesday 06 May 2025 00:57:29 +0000 (0:00:00.308) 0:00:22.266 ***********
2025-05-06 00:59:18.510906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:59:18.510916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:59:18.510926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:59:18.510936 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.510946 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:59:18.510963 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:59:18.510973 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:59:18.510983 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:59:18.510993 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:59:18.511003 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.511013 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:59:18.511027 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.511038 | orchestrator |
2025-05-06 00:59:18.511048 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-05-06 00:59:18.511058 | orchestrator | Tuesday 06 May 2025 00:57:30 +0000 (0:00:01.421) 0:00:23.687 ***********
2025-05-06 00:59:18.511068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:59:18.511078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:59:18.511088 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:59:18.511098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:59:18.511108 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.511118 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:59:18.511128 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:59:18.511138 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:59:18.511148 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:59:18.511157 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.511167 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:59:18.511177 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.511187 | orchestrator |
2025-05-06 00:59:18.511198 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-06 00:59:18.511208 | orchestrator | Tuesday 06 May 2025 00:57:31 +0000 (0:00:00.749) 0:00:24.436 ***********
2025-05-06 00:59:18.511218 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:59:18.511228 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:59:18.511237 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:59:18.511247 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:59:18.511257 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:59:18.511267 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:59:18.511277 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:59:18.511287 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:59:18.511297 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:59:18.511307 | orchestrator |
2025-05-06 00:59:18.511318 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-06 00:59:18.511327 | orchestrator | Tuesday 06 May 2025 00:57:33 +0000 (0:00:01.688) 0:00:26.124 ***********
2025-05-06 00:59:18.511343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:59:18.511353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:59:18.511363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:59:18.511373 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.511384 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:59:18.511394 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:59:18.511404 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:59:18.511413 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:59:18.511423 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.511433 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:59:18.511443 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:59:18.511453 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.511463 | orchestrator |
2025-05-06 00:59:18.511473 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-06 00:59:18.511483 | orchestrator | Tuesday 06 May 2025 00:57:33 +0000 (0:00:00.694) 0:00:26.819 ***********
2025-05-06 00:59:18.511505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-06 00:59:18.511515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-06 00:59:18.511524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-06 00:59:18.511534 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.511544 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-06 00:59:18.511554 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-06 00:59:18.511564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-06 00:59:18.511574 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.511584 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-06 00:59:18.511594 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-06 00:59:18.511604 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-06 00:59:18.511613 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.511791 | orchestrator |
2025-05-06 00:59:18.511803 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-05-06 00:59:18.511813 | orchestrator | Tuesday 06 May 2025 00:57:34 +0000 (0:00:00.518) 0:00:27.337 ***********
2025-05-06 00:59:18.511823 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-06 00:59:18.511834 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-06 00:59:18.511844 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-06 00:59:18.511855 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-06 00:59:18.511865 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-06 00:59:18.511875 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-06 00:59:18.511885 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.511900 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.511911 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-06 00:59:18.511921 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-06 00:59:18.511931 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-06 00:59:18.511941 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.511951 | orchestrator |
2025-05-06 00:59:18.511961 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-05-06 00:59:18.511977 | orchestrator | Tuesday 06 May 2025 00:57:34 +0000 (0:00:00.483) 0:00:27.821 ***********
2025-05-06 00:59:18.511987 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 00:59:18.511997 | orchestrator |
2025-05-06 00:59:18.512007 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-06 00:59:18.512018 | orchestrator | Tuesday 06 May 2025 00:57:35 +0000 (0:00:00.726) 0:00:28.547 ***********
2025-05-06 00:59:18.512028 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.512038 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.512047 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.512057 | orchestrator |
2025-05-06 00:59:18.512067 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-06 00:59:18.512077 | orchestrator | Tuesday 06 May 2025 00:57:35 +0000 (0:00:00.272) 0:00:28.819 ***********
2025-05-06 00:59:18.512087 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.512097 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.512107 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.512117 | orchestrator |
2025-05-06 00:59:18.512127 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-06 00:59:18.512137 | orchestrator | Tuesday 06 May 2025 00:57:36 +0000 (0:00:00.250) 0:00:29.070 ***********
2025-05-06 00:59:18.512146 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.512160 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.512170 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.512180 | orchestrator |
2025-05-06 00:59:18.512190 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-06 00:59:18.512200 | orchestrator | Tuesday 06 May 2025 00:57:36 +0000 (0:00:00.269) 0:00:29.340 ***********
2025-05-06 00:59:18.512210 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:59:18.512221 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:59:18.512231 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:59:18.512278 | orchestrator |
2025-05-06 00:59:18.512288 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-06 00:59:18.512298 | orchestrator | Tuesday 06 May 2025 00:57:36 +0000 (0:00:00.467) 0:00:29.807 ***********
2025-05-06 00:59:18.512308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:59:18.512318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-06 00:59:18.512328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-06 00:59:18.512338 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.512348 | orchestrator |
2025-05-06 00:59:18.512358 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-06 00:59:18.512368 | orchestrator | Tuesday 06 May 2025 00:57:37 +0000 (0:00:00.326) 0:00:30.134 ***********
2025-05-06 00:59:18.512378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:59:18.512388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-06 00:59:18.512402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-06 00:59:18.512412 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.512422 | orchestrator |
2025-05-06 00:59:18.512432 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-06 00:59:18.512442 | orchestrator | Tuesday 06 May 2025 00:57:37 +0000 (0:00:00.356) 0:00:30.491 ***********
2025-05-06 00:59:18.512452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:59:18.512462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-06 00:59:18.512472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-06 00:59:18.512482 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.512506 | orchestrator |
2025-05-06 00:59:18.512516 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-06 00:59:18.512530 | orchestrator | Tuesday 06 May 2025 00:57:37 +0000 (0:00:00.358) 0:00:30.850 ***********
2025-05-06 00:59:18.512547 | orchestrator | ok: [testbed-node-3]
2025-05-06 00:59:18.512557 | orchestrator | ok: [testbed-node-4]
2025-05-06 00:59:18.512567 | orchestrator | ok: [testbed-node-5]
2025-05-06 00:59:18.512577 | orchestrator |
2025-05-06 00:59:18.512587 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-06 00:59:18.512597 | orchestrator | Tuesday 06 May 2025 00:57:38 +0000 (0:00:00.275) 0:00:31.126 ***********
2025-05-06 00:59:18.512607 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-06 00:59:18.512617 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-06 00:59:18.512627 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-06 00:59:18.512637 | orchestrator |
2025-05-06 00:59:18.512647 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-06 00:59:18.512657 | orchestrator | Tuesday 06 May 2025 00:57:38 +0000 (0:00:00.490) 0:00:31.616 ***********
2025-05-06 00:59:18.512667 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.512677 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.512686 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.512696 | orchestrator |
2025-05-06 00:59:18.512706 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-06 00:59:18.512717 | orchestrator | Tuesday 06 May 2025 00:57:39 +0000 (0:00:00.297) 0:00:32.013 ***********
2025-05-06 00:59:18.512726 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.512737 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.512746 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.512757 | orchestrator |
2025-05-06 00:59:18.512771 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-06 00:59:18.512782 | orchestrator | Tuesday 06 May 2025 00:57:39 +0000 (0:00:00.297) 0:00:32.310 ***********
2025-05-06 00:59:18.512792 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-06 00:59:18.512802 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.512812 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-06 00:59:18.512822 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.512832 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-06 00:59:18.512842 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.512852 | orchestrator |
2025-05-06 00:59:18.512862 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-06 00:59:18.512872 | orchestrator | Tuesday 06 May 2025 00:57:39 +0000 (0:00:00.445) 0:00:32.756 ***********
2025-05-06 00:59:18.512882 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-06 00:59:18.512892 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.512902 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-06 00:59:18.512912 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.512922 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-06 00:59:18.512932 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.512942 | orchestrator |
2025-05-06 00:59:18.512952 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-06 00:59:18.512962 | orchestrator | Tuesday 06 May 2025 00:57:40 +0000 (0:00:00.305) 0:00:33.062 ***********
2025-05-06 00:59:18.512972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:59:18.512982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-06 00:59:18.512992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-06 00:59:18.513002 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.513012 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-06 00:59:18.513022 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-06 00:59:18.513032 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-06 00:59:18.513047 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-06 00:59:18.513058 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.513068 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-06 00:59:18.513077 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-06 00:59:18.513087 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.513097 | orchestrator |
2025-05-06 00:59:18.513107 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-06 00:59:18.513117 | orchestrator | Tuesday 06 May 2025 00:57:41 +0000 (0:00:01.063) 0:00:34.125 ***********
2025-05-06 00:59:18.513127 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.513138 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.513148 | orchestrator | skipping: [testbed-node-5]
2025-05-06 00:59:18.513158 | orchestrator |
2025-05-06 00:59:18.513168 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-06 00:59:18.513178 | orchestrator | Tuesday 06 May 2025 00:57:41 +0000 (0:00:00.251) 0:00:34.377 ***********
2025-05-06 00:59:18.513188 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-06 00:59:18.513198 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-06 00:59:18.513207 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-06 00:59:18.513217 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:59:18.513228 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-06 00:59:18.513238 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-06 00:59:18.513247 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-06 00:59:18.513257 | orchestrator |
2025-05-06 00:59:18.513267 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-06 00:59:18.513277 | orchestrator | Tuesday 06 May 2025 00:57:42 +0000 (0:00:00.924) 0:00:35.302 ***********
2025-05-06 00:59:18.513287 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-06 00:59:18.513297 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-06 00:59:18.513307 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-06 00:59:18.513317 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-05-06 00:59:18.513327 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-06 00:59:18.513337 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-06 00:59:18.513347 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-06 00:59:18.513357 | orchestrator |
2025-05-06 00:59:18.513367 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-05-06 00:59:18.513380 | orchestrator | Tuesday 06 May 2025 00:57:43 +0000 (0:00:01.429) 0:00:36.731 ***********
2025-05-06 00:59:18.513390 | orchestrator | skipping: [testbed-node-3]
2025-05-06 00:59:18.513400 | orchestrator | skipping: [testbed-node-4]
2025-05-06 00:59:18.513411 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-05-06 00:59:18.513421 | orchestrator |
2025-05-06 00:59:18.513435 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-05-06 00:59:18.513446 | orchestrator | Tuesday 06 May 2025 00:57:44 +0000 (0:00:00.730) 0:00:37.461 ***********
2025-05-06 00:59:18.513457 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-06 00:59:18.513475 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-06 00:59:18.513521 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-06 00:59:18.513533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-06 00:59:18.513543 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-06 00:59:18.513553 | orchestrator |
2025-05-06 00:59:18.513563 | orchestrator | TASK [generate keys] ***********************************************************
2025-05-06 00:59:18.513573 | orchestrator | Tuesday 06 May 2025 00:58:24 +0000 (0:00:40.051) 0:01:17.512 ***********
2025-05-06 00:59:18.513583 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513593 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513603 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513613 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513623 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513633 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513643 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-05-06 00:59:18.513653 | orchestrator |
2025-05-06 00:59:18.513663 | orchestrator | TASK [get keys from monitors] **************************************************
2025-05-06 00:59:18.513673 | orchestrator | Tuesday 06 May 2025 00:58:45 +0000 (0:00:21.365) 0:01:38.877 ***********
2025-05-06 00:59:18.513683 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513693 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513703 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513713 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513723 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513733 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513743 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-06 00:59:18.513753 | orchestrator |
2025-05-06 00:59:18.513763 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-05-06 00:59:18.513773 | orchestrator | Tuesday 06 May 2025 00:58:56 +0000 (0:00:10.805) 0:01:49.683 ***********
2025-05-06 00:59:18.513783 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513793 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-06 00:59:18.513807 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-06 00:59:18.513817 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513827 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-06 00:59:18.513842 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-06 00:59:18.513852 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:18.513862 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-06 00:59:18.513873 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-06 00:59:18.513888 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:21.568116 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-06 00:59:21.568273 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-06 00:59:21.568294 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:21.568310 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-06 00:59:21.568324 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-06 00:59:21.568339 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-06 00:59:21.568352 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-06 00:59:21.568366 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-06 00:59:21.568381 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-05-06 00:59:21.568395 | orchestrator |
2025-05-06 00:59:21.568410 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:59:21.568426 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-06 00:59:21.568441 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-05-06 00:59:21.568457 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0
2025-05-06 00:59:21.568471 | orchestrator |
2025-05-06 00:59:21.568512 | orchestrator |
2025-05-06 00:59:21.568528 | orchestrator |
2025-05-06 00:59:21.568542 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 00:59:21.568556 | orchestrator | Tuesday 06 May 2025 00:59:15 +0000 (0:00:18.545) 0:02:08.228 ***********
2025-05-06 00:59:21.568569 | orchestrator | ===============================================================================
2025-05-06 00:59:21.568583 | orchestrator | create openstack pool(s) ----------------------------------------------- 40.05s
2025-05-06 00:59:21.568597 | orchestrator | generate keys ---------------------------------------------------------- 21.37s
2025-05-06 00:59:21.568611 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.55s
2025-05-06 00:59:21.568624 | orchestrator | get keys from monitors ------------------------------------------------- 10.81s
2025-05-06 00:59:21.568639 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.25s
2025-05-06 00:59:21.568656 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.69s
2025-05-06 00:59:21.568671 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.43s
2025-05-06 00:59:21.568687 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.42s
2025-05-06 00:59:21.568703 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.38s
2025-05-06 00:59:21.568720 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.17s
2025-05-06 00:59:21.568736 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 1.06s
2025-05-06 00:59:21.568751 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.92s
2025-05-06 00:59:21.568768 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.78s
2025-05-06 00:59:21.568806 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.75s
2025-05-06 00:59:21.568823 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.75s
2025-05-06 00:59:21.568839 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.75s
2025-05-06 00:59:21.568855 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.74s
2025-05-06 00:59:21.568871 | orchestrator | Include tasks from the ceph-osd role ------------------------------------ 0.73s
2025-05-06 00:59:21.568887 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.73s
2025-05-06 00:59:21.568902 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.70s
2025-05-06 00:59:21.568918 | orchestrator | 2025-05-06 00:59:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:21.568935 | orchestrator | 2025-05-06 00:59:18 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:21.568949 | orchestrator | 2025-05-06 00:59:18 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:21.568963 | orchestrator | 2025-05-06 00:59:18 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:21.568995 | orchestrator | 2025-05-06 00:59:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:21.569609 | orchestrator | 2025-05-06 00:59:21 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:21.570935 | orchestrator | 2025-05-06 00:59:21 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:24.617536 | orchestrator | 2025-05-06 00:59:21 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:24.617679 | orchestrator | 2025-05-06 00:59:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:24.618289 | orchestrator | 2025-05-06 00:59:24 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:24.618944 | orchestrator | 2025-05-06 00:59:24 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:24.619097 | orchestrator | 2025-05-06 00:59:24 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:27.671678 | orchestrator | 2025-05-06 00:59:27 | INFO  | Task ebb77e1b-d0e4-4cd0-90c7-2bedff691a9f is in state STARTED
2025-05-06 00:59:27.672741 | orchestrator | 2025-05-06 00:59:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:27.674733 | orchestrator | 2025-05-06 00:59:27 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:27.676526 | orchestrator | 2025-05-06 00:59:27 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:30.733948 | orchestrator | 2025-05-06 00:59:27 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:30.734177 | orchestrator | 2025-05-06 00:59:30 | INFO  | Task ebb77e1b-d0e4-4cd0-90c7-2bedff691a9f is in state STARTED
2025-05-06 00:59:30.734459 | orchestrator | 2025-05-06 00:59:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:30.736079 | orchestrator | 2025-05-06 00:59:30 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:30.737689 | orchestrator | 2025-05-06 00:59:30 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:30.737907 | orchestrator | 2025-05-06 00:59:30 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:33.791517 | orchestrator | 2025-05-06 00:59:33 | INFO  | Task ebb77e1b-d0e4-4cd0-90c7-2bedff691a9f is in state STARTED
2025-05-06 00:59:33.792563 | orchestrator | 2025-05-06 00:59:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:33.793995 | orchestrator | 2025-05-06 00:59:33 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:33.795048 | orchestrator | 2025-05-06 00:59:33 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:36.853510 | orchestrator | 2025-05-06 00:59:33 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:36.853654 | orchestrator | 2025-05-06 00:59:36 | INFO  | Task ebb77e1b-d0e4-4cd0-90c7-2bedff691a9f is in state STARTED
2025-05-06 00:59:36.855184 | orchestrator | 2025-05-06 00:59:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:36.858066 | orchestrator | 2025-05-06 00:59:36 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:36.859686 | orchestrator | 2025-05-06 00:59:36 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:39.903988 | orchestrator | 2025-05-06 00:59:36 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:39.904137 | orchestrator | 2025-05-06 00:59:39 | INFO  | Task ebb77e1b-d0e4-4cd0-90c7-2bedff691a9f is in state STARTED
2025-05-06 00:59:39.904710 | orchestrator | 2025-05-06 00:59:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:39.906168 | orchestrator | 2025-05-06 00:59:39 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:39.907907 | orchestrator | 2025-05-06 00:59:39 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:42.962422 | orchestrator | 2025-05-06 00:59:39 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:42.962608 | orchestrator | 2025-05-06 00:59:42 | INFO  | Task ebb77e1b-d0e4-4cd0-90c7-2bedff691a9f is in state STARTED
2025-05-06 00:59:42.965013 | orchestrator | 2025-05-06 00:59:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:42.967025 | orchestrator | 2025-05-06 00:59:42 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:42.969098 | orchestrator | 2025-05-06 00:59:42 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:46.020253 | orchestrator | 2025-05-06 00:59:42 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:46.020400 | orchestrator | 2025-05-06 00:59:46 | INFO  | Task ebb77e1b-d0e4-4cd0-90c7-2bedff691a9f is in state STARTED
2025-05-06 00:59:46.022616 | orchestrator | 2025-05-06 00:59:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:46.024277 | orchestrator | 2025-05-06 00:59:46 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:46.025932 | orchestrator | 2025-05-06 00:59:46 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:49.071786 | orchestrator | 2025-05-06 00:59:46 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:49.071923 | orchestrator | 2025-05-06 00:59:49 | INFO  | Task ebb77e1b-d0e4-4cd0-90c7-2bedff691a9f is in state STARTED
2025-05-06 00:59:49.073836 | orchestrator | 2025-05-06 00:59:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:49.075259 | orchestrator | 2025-05-06 00:59:49 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:49.076705 | orchestrator | 2025-05-06 00:59:49 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:52.129404 | orchestrator | 2025-05-06 00:59:49 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:52.129596 | orchestrator | 2025-05-06 00:59:52 | INFO  | Task ebb77e1b-d0e4-4cd0-90c7-2bedff691a9f is in state STARTED
2025-05-06 00:59:52.131096 | orchestrator | 2025-05-06 00:59:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:52.132968 | orchestrator | 2025-05-06 00:59:52 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:52.134310 | orchestrator | 2025-05-06 00:59:52 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state STARTED
2025-05-06 00:59:55.189075 | orchestrator | 2025-05-06 00:59:52 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:55.189226 | orchestrator |
2025-05-06 00:59:55.189249 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-06 00:59:55.189264 | orchestrator |
2025-05-06 00:59:55.189339 | orchestrator | PLAY [Apply role fetch-keys] ***************************************************
2025-05-06 00:59:55.189433 | orchestrator |
2025-05-06 00:59:55.189485 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-06 00:59:55.189501 | orchestrator | Tuesday 06 May 2025 00:59:27 +0000 (0:00:00.448) 0:00:00.448 ***********
2025-05-06 00:59:55.189515 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0
2025-05-06 00:59:55.189531 | orchestrator |
2025-05-06 00:59:55.189545 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-05-06 00:59:55.189560 | orchestrator | Tuesday 06 May 2025 00:59:27 +0000 (0:00:00.218) 0:00:00.666 ***********
2025-05-06 00:59:55.189574 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:59:55.189589 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:59:55.189603 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:59:55.189617 | orchestrator |
2025-05-06 00:59:55.189631 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-05-06 00:59:55.189645 | orchestrator | Tuesday 06 May 2025 00:59:28 +0000 (0:00:00.805) 0:00:01.471 ***********
2025-05-06 00:59:55.189659 | orchestrator | included:
/ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-06 00:59:55.189673 | orchestrator | 2025-05-06 00:59:55.189687 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-06 00:59:55.189701 | orchestrator | Tuesday 06 May 2025 00:59:28 +0000 (0:00:00.227) 0:00:01.699 *********** 2025-05-06 00:59:55.189715 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.189730 | orchestrator | 2025-05-06 00:59:55.189744 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-06 00:59:55.189758 | orchestrator | Tuesday 06 May 2025 00:59:29 +0000 (0:00:00.627) 0:00:02.327 *********** 2025-05-06 00:59:55.189772 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.189786 | orchestrator | 2025-05-06 00:59:55.189800 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-06 00:59:55.189814 | orchestrator | Tuesday 06 May 2025 00:59:29 +0000 (0:00:00.135) 0:00:02.462 *********** 2025-05-06 00:59:55.189828 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.189842 | orchestrator | 2025-05-06 00:59:55.189856 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-06 00:59:55.189870 | orchestrator | Tuesday 06 May 2025 00:59:29 +0000 (0:00:00.481) 0:00:02.944 *********** 2025-05-06 00:59:55.189884 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.189898 | orchestrator | 2025-05-06 00:59:55.189911 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-06 00:59:55.189925 | orchestrator | Tuesday 06 May 2025 00:59:29 +0000 (0:00:00.136) 0:00:03.080 *********** 2025-05-06 00:59:55.189939 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.189953 | orchestrator | 2025-05-06 00:59:55.189967 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-06 
00:59:55.190008 | orchestrator | Tuesday 06 May 2025 00:59:30 +0000 (0:00:00.135) 0:00:03.216 *********** 2025-05-06 00:59:55.190088 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.190104 | orchestrator | 2025-05-06 00:59:55.190121 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-06 00:59:55.190138 | orchestrator | Tuesday 06 May 2025 00:59:30 +0000 (0:00:00.156) 0:00:03.372 *********** 2025-05-06 00:59:55.190153 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.190170 | orchestrator | 2025-05-06 00:59:55.190187 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-06 00:59:55.190200 | orchestrator | Tuesday 06 May 2025 00:59:30 +0000 (0:00:00.145) 0:00:03.518 *********** 2025-05-06 00:59:55.190214 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.190228 | orchestrator | 2025-05-06 00:59:55.190241 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-06 00:59:55.190255 | orchestrator | Tuesday 06 May 2025 00:59:30 +0000 (0:00:00.297) 0:00:03.816 *********** 2025-05-06 00:59:55.190269 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-06 00:59:55.190284 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-06 00:59:55.190297 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-06 00:59:55.190311 | orchestrator | 2025-05-06 00:59:55.190325 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-06 00:59:55.190339 | orchestrator | Tuesday 06 May 2025 00:59:31 +0000 (0:00:00.646) 0:00:04.463 *********** 2025-05-06 00:59:55.190352 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.190366 | orchestrator | 2025-05-06 00:59:55.190380 | orchestrator | TASK [ceph-facts : find a running mon container] 
******************************* 2025-05-06 00:59:55.190401 | orchestrator | Tuesday 06 May 2025 00:59:31 +0000 (0:00:00.238) 0:00:04.701 *********** 2025-05-06 00:59:55.190415 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-06 00:59:55.190430 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-06 00:59:55.190475 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-06 00:59:55.190490 | orchestrator | 2025-05-06 00:59:55.190505 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-06 00:59:55.190518 | orchestrator | Tuesday 06 May 2025 00:59:33 +0000 (0:00:01.893) 0:00:06.595 *********** 2025-05-06 00:59:55.190532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-06 00:59:55.190552 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-06 00:59:55.190575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-06 00:59:55.190589 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.190603 | orchestrator | 2025-05-06 00:59:55.190617 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-06 00:59:55.190643 | orchestrator | Tuesday 06 May 2025 00:59:33 +0000 (0:00:00.410) 0:00:07.005 *********** 2025-05-06 00:59:55.190664 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-06 00:59:55.190681 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-06 
00:59:55.190695 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-06 00:59:55.190710 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.190724 | orchestrator | 2025-05-06 00:59:55.190738 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-06 00:59:55.190762 | orchestrator | Tuesday 06 May 2025 00:59:34 +0000 (0:00:00.777) 0:00:07.783 *********** 2025-05-06 00:59:55.190777 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-06 00:59:55.190794 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-06 00:59:55.190808 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-06 
00:59:55.190822 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.190836 | orchestrator | 2025-05-06 00:59:55.190850 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-06 00:59:55.190864 | orchestrator | Tuesday 06 May 2025 00:59:34 +0000 (0:00:00.195) 0:00:07.979 *********** 2025-05-06 00:59:55.190883 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '6924cdc93e01', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-06 00:59:32.238304', 'end': '2025-05-06 00:59:32.287320', 'delta': '0:00:00.049016', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6924cdc93e01'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-06 00:59:55.190902 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '6081863ef374', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-06 00:59:32.776191', 'end': '2025-05-06 00:59:32.813491', 'delta': '0:00:00.037300', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6081863ef374'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-06 00:59:55.190930 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '9cca38efb257', 'stderr': '', 
'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-06 00:59:33.274541', 'end': '2025-05-06 00:59:33.315919', 'delta': '0:00:00.041378', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9cca38efb257'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-06 00:59:55.190957 | orchestrator | 2025-05-06 00:59:55.190972 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-06 00:59:55.190986 | orchestrator | Tuesday 06 May 2025 00:59:35 +0000 (0:00:00.197) 0:00:08.176 *********** 2025-05-06 00:59:55.191000 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.191014 | orchestrator | 2025-05-06 00:59:55.191028 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-06 00:59:55.191042 | orchestrator | Tuesday 06 May 2025 00:59:35 +0000 (0:00:00.232) 0:00:08.408 *********** 2025-05-06 00:59:55.191055 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-06 00:59:55.191069 | orchestrator | 2025-05-06 00:59:55.191083 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-06 00:59:55.191097 | orchestrator | Tuesday 06 May 2025 00:59:36 +0000 (0:00:01.643) 0:00:10.051 *********** 2025-05-06 00:59:55.191110 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191128 | orchestrator | 2025-05-06 00:59:55.191148 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-06 00:59:55.191162 | orchestrator | Tuesday 06 May 2025 00:59:37 +0000 (0:00:00.143) 0:00:10.195 
*********** 2025-05-06 00:59:55.191175 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191189 | orchestrator | 2025-05-06 00:59:55.191203 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-06 00:59:55.191217 | orchestrator | Tuesday 06 May 2025 00:59:37 +0000 (0:00:00.202) 0:00:10.398 *********** 2025-05-06 00:59:55.191231 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191245 | orchestrator | 2025-05-06 00:59:55.191259 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-06 00:59:55.191272 | orchestrator | Tuesday 06 May 2025 00:59:37 +0000 (0:00:00.143) 0:00:10.541 *********** 2025-05-06 00:59:55.191286 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.191301 | orchestrator | 2025-05-06 00:59:55.191314 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-06 00:59:55.191331 | orchestrator | Tuesday 06 May 2025 00:59:37 +0000 (0:00:00.123) 0:00:10.664 *********** 2025-05-06 00:59:55.191352 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191375 | orchestrator | 2025-05-06 00:59:55.191390 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-06 00:59:55.191404 | orchestrator | Tuesday 06 May 2025 00:59:37 +0000 (0:00:00.229) 0:00:10.894 *********** 2025-05-06 00:59:55.191418 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191433 | orchestrator | 2025-05-06 00:59:55.191464 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-06 00:59:55.191479 | orchestrator | Tuesday 06 May 2025 00:59:37 +0000 (0:00:00.132) 0:00:11.026 *********** 2025-05-06 00:59:55.191493 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191508 | orchestrator | 2025-05-06 00:59:55.191522 | orchestrator | TASK [ceph-facts : set_fact build devices from 
resolved symlinks] ************** 2025-05-06 00:59:55.191536 | orchestrator | Tuesday 06 May 2025 00:59:38 +0000 (0:00:00.128) 0:00:11.155 *********** 2025-05-06 00:59:55.191549 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191563 | orchestrator | 2025-05-06 00:59:55.191578 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-06 00:59:55.191607 | orchestrator | Tuesday 06 May 2025 00:59:38 +0000 (0:00:00.122) 0:00:11.278 *********** 2025-05-06 00:59:55.191622 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191636 | orchestrator | 2025-05-06 00:59:55.191650 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-06 00:59:55.191664 | orchestrator | Tuesday 06 May 2025 00:59:38 +0000 (0:00:00.126) 0:00:11.404 *********** 2025-05-06 00:59:55.191678 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191692 | orchestrator | 2025-05-06 00:59:55.191706 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-06 00:59:55.191719 | orchestrator | Tuesday 06 May 2025 00:59:38 +0000 (0:00:00.300) 0:00:11.705 *********** 2025-05-06 00:59:55.191733 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191754 | orchestrator | 2025-05-06 00:59:55.191768 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-06 00:59:55.191782 | orchestrator | Tuesday 06 May 2025 00:59:38 +0000 (0:00:00.132) 0:00:11.838 *********** 2025-05-06 00:59:55.191799 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.191818 | orchestrator | 2025-05-06 00:59:55.191833 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-06 00:59:55.191846 | orchestrator | Tuesday 06 May 2025 00:59:38 +0000 (0:00:00.129) 0:00:11.967 *********** 2025-05-06 00:59:55.191860 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:55.191882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:55.191897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:55.191912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:55.191926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:55.191945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:55.191959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:55.191974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-06 00:59:55.192005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834', 'scsi-SQEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part1', 'scsi-SQEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part14', 'scsi-SQEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part15', 'scsi-SQEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part16', 'scsi-SQEMU_QEMU_HARDDISK_971680de-ee79-4aff-976e-b13f7aba5834-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:55.192032 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7066bed1-b6f5-4fc6-91d4-16dfe41e1882', 'scsi-SQEMU_QEMU_HARDDISK_7066bed1-b6f5-4fc6-91d4-16dfe41e1882'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:55.192049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_db071690-0f8e-4535-a70c-dc0b8d604c8e', 'scsi-SQEMU_QEMU_HARDDISK_db071690-0f8e-4535-a70c-dc0b8d604c8e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:55.192064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e73239c-12d8-4b54-bea1-88c93f0679a4', 'scsi-SQEMU_QEMU_HARDDISK_1e73239c-12d8-4b54-bea1-88c93f0679a4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:55.192085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-06-00-02-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-06 00:59:55.192101 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.192115 | orchestrator | 2025-05-06 00:59:55.192129 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-06 00:59:55.192144 | orchestrator | Tuesday 06 May 2025 00:59:39 +0000 (0:00:00.256) 0:00:12.224 *********** 2025-05-06 00:59:55.192158 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.192172 | orchestrator | 2025-05-06 00:59:55.192186 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-06 00:59:55.192200 | orchestrator | Tuesday 06 May 2025 00:59:39 +0000 (0:00:00.255) 0:00:12.480 *********** 2025-05-06 00:59:55.192214 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.192228 | orchestrator | 2025-05-06 00:59:55.192241 | orchestrator | TASK [ceph-facts : set_fact 
rgw_hostname] ************************************** 2025-05-06 00:59:55.192255 | orchestrator | Tuesday 06 May 2025 00:59:39 +0000 (0:00:00.129) 0:00:12.609 *********** 2025-05-06 00:59:55.192278 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.192293 | orchestrator | 2025-05-06 00:59:55.192306 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-06 00:59:55.192320 | orchestrator | Tuesday 06 May 2025 00:59:39 +0000 (0:00:00.123) 0:00:12.732 *********** 2025-05-06 00:59:55.192340 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.192355 | orchestrator | 2025-05-06 00:59:55.192369 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-06 00:59:55.192383 | orchestrator | Tuesday 06 May 2025 00:59:40 +0000 (0:00:00.518) 0:00:13.251 *********** 2025-05-06 00:59:55.192396 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.192410 | orchestrator | 2025-05-06 00:59:55.192424 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-06 00:59:55.192437 | orchestrator | Tuesday 06 May 2025 00:59:40 +0000 (0:00:00.116) 0:00:13.368 *********** 2025-05-06 00:59:55.192515 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.192530 | orchestrator | 2025-05-06 00:59:55.192544 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-06 00:59:55.192558 | orchestrator | Tuesday 06 May 2025 00:59:40 +0000 (0:00:00.493) 0:00:13.861 *********** 2025-05-06 00:59:55.192572 | orchestrator | ok: [testbed-node-0] 2025-05-06 00:59:55.192586 | orchestrator | 2025-05-06 00:59:55.192599 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-06 00:59:55.192613 | orchestrator | Tuesday 06 May 2025 00:59:41 +0000 (0:00:00.328) 0:00:14.189 *********** 2025-05-06 00:59:55.192627 | orchestrator | skipping: 
[testbed-node-0]
2025-05-06 00:59:55.192640 | orchestrator |
2025-05-06 00:59:55.192654 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-06 00:59:55.192676 | orchestrator | Tuesday 06 May 2025 00:59:41 +0000 (0:00:00.262) 0:00:14.451 ***********
2025-05-06 00:59:55.192690 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.192704 | orchestrator |
2025-05-06 00:59:55.192718 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-05-06 00:59:55.192732 | orchestrator | Tuesday 06 May 2025 00:59:41 +0000 (0:00:00.128) 0:00:14.580 ***********
2025-05-06 00:59:55.192745 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:59:55.192759 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:59:55.192773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:59:55.192792 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.192805 | orchestrator |
2025-05-06 00:59:55.192817 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-05-06 00:59:55.192830 | orchestrator | Tuesday 06 May 2025 00:59:41 +0000 (0:00:00.450) 0:00:15.030 ***********
2025-05-06 00:59:55.192842 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:59:55.192854 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:59:55.192867 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:59:55.192879 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.192891 | orchestrator |
2025-05-06 00:59:55.192903 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-06 00:59:55.192920 | orchestrator | Tuesday 06 May 2025 00:59:42 +0000 (0:00:00.474) 0:00:15.504 ***********
2025-05-06 00:59:55.192933 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:59:55.192946 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:59:55.192958 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:59:55.192970 | orchestrator |
2025-05-06 00:59:55.192983 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-06 00:59:55.192996 | orchestrator | Tuesday 06 May 2025 00:59:43 +0000 (0:00:01.074) 0:00:16.579 ***********
2025-05-06 00:59:55.193008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:59:55.193020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:59:55.193032 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:59:55.193045 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.193057 | orchestrator |
2025-05-06 00:59:55.193069 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-06 00:59:55.193081 | orchestrator | Tuesday 06 May 2025 00:59:43 +0000 (0:00:00.213) 0:00:16.793 ***********
2025-05-06 00:59:55.193094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:59:55.193106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-06 00:59:55.193118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-06 00:59:55.193130 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.193142 | orchestrator |
2025-05-06 00:59:55.193154 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-05-06 00:59:55.193167 | orchestrator | Tuesday 06 May 2025 00:59:43 +0000 (0:00:00.211) 0:00:17.004 ***********
2025-05-06 00:59:55.193179 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-06 00:59:55.193191 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-06 00:59:55.193203 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-06 00:59:55.193216 | orchestrator |
2025-05-06 00:59:55.193228 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-05-06 00:59:55.193250 | orchestrator | Tuesday 06 May 2025 00:59:44 +0000 (0:00:00.196) 0:00:17.201 ***********
2025-05-06 00:59:55.193262 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.193275 | orchestrator |
2025-05-06 00:59:55.193287 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-06 00:59:55.193299 | orchestrator | Tuesday 06 May 2025 00:59:44 +0000 (0:00:00.127) 0:00:17.328 ***********
2025-05-06 00:59:55.193312 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.193324 | orchestrator |
2025-05-06 00:59:55.193336 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-06 00:59:55.193348 | orchestrator | Tuesday 06 May 2025 00:59:44 +0000 (0:00:00.372) 0:00:17.701 ***********
2025-05-06 00:59:55.193361 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:59:55.193379 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-06 00:59:55.193399 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-06 00:59:55.193419 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-06 00:59:55.193432 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-06 00:59:55.193485 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-06 00:59:55.193500 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-06 00:59:55.193512 | orchestrator |
2025-05-06 00:59:55.193525 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-06 00:59:55.193537 | orchestrator | Tuesday 06 May 2025 00:59:45 +0000 (0:00:00.817) 0:00:18.519 ***********
2025-05-06 00:59:55.193550 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-06 00:59:55.193563 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-06 00:59:55.193575 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-06 00:59:55.193587 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-06 00:59:55.193600 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-06 00:59:55.193616 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-06 00:59:55.193633 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-06 00:59:55.193646 | orchestrator |
2025-05-06 00:59:55.193658 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ******************************
2025-05-06 00:59:55.193670 | orchestrator | Tuesday 06 May 2025 00:59:46 +0000 (0:00:01.444) 0:00:19.963 ***********
2025-05-06 00:59:55.193682 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:59:55.193695 | orchestrator |
2025-05-06 00:59:55.193707 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] ***
2025-05-06 00:59:55.193720 | orchestrator | Tuesday 06 May 2025 00:59:47 +0000 (0:00:00.548) 0:00:20.410 ***********
2025-05-06 00:59:55.193732 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-06 00:59:55.193744 | orchestrator |
2025-05-06 00:59:55.193757 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] ***
2025-05-06 00:59:55.193775 | orchestrator | Tuesday 06 May 2025 00:59:47 +0000 (0:00:00.548) 0:00:20.959 ***********
2025-05-06 00:59:55.193788 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring)
2025-05-06 00:59:55.193800 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring)
2025-05-06 00:59:55.193813 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring)
2025-05-06 00:59:55.193825 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring)
2025-05-06 00:59:55.193837 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring)
2025-05-06 00:59:55.193849 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring)
2025-05-06 00:59:55.193861 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring)
2025-05-06 00:59:55.193873 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring)
2025-05-06 00:59:55.193886 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring)
2025-05-06 00:59:55.193898 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring)
2025-05-06 00:59:55.193910 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring)
2025-05-06 00:59:55.193923 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring)
2025-05-06 00:59:55.193935 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)
2025-05-06 00:59:55.193958 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)
2025-05-06 00:59:55.193975 | orchestrator | changed: [testbed-node-0] =>
(item=/var/lib/ceph/bootstrap-mds/ceph.keyring)
2025-05-06 00:59:55.193988 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)
2025-05-06 00:59:55.194000 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring)
2025-05-06 00:59:55.194011 | orchestrator |
2025-05-06 00:59:55.194046 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:59:55.194057 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-06 00:59:55.194068 | orchestrator |
2025-05-06 00:59:55.194078 | orchestrator |
2025-05-06 00:59:55.194088 | orchestrator |
2025-05-06 00:59:55.194098 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 00:59:55.194108 | orchestrator | Tuesday 06 May 2025 00:59:54 +0000 (0:00:06.393) 0:00:27.353 ***********
2025-05-06 00:59:55.194118 | orchestrator | ===============================================================================
2025-05-06 00:59:55.194129 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.39s
2025-05-06 00:59:55.194139 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.89s
2025-05-06 00:59:55.194149 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.64s
2025-05-06 00:59:55.194165 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.44s
2025-05-06 00:59:55.194768 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.07s
2025-05-06 00:59:55.194788 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.82s
2025-05-06 00:59:55.194799 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.81s
2025-05-06 00:59:55.194809 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.78s
2025-05-06 00:59:55.194819 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s
2025-05-06 00:59:55.194829 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.63s
2025-05-06 00:59:55.194839 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.55s
2025-05-06 00:59:55.194855 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.52s
2025-05-06 00:59:55.194865 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.49s
2025-05-06 00:59:55.194875 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.48s
2025-05-06 00:59:55.194885 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.47s
2025-05-06 00:59:55.194895 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.45s
2025-05-06 00:59:55.194905 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.45s
2025-05-06 00:59:55.194915 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.41s
2025-05-06 00:59:55.194925 | orchestrator | ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli --- 0.37s
2025-05-06 00:59:55.194935 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.33s
2025-05-06 00:59:55.194945 | orchestrator | 2025-05-06 00:59:55 | INFO  | Task ebb77e1b-d0e4-4cd0-90c7-2bedff691a9f is in state SUCCESS
2025-05-06 00:59:55.194956 | orchestrator | 2025-05-06 00:59:55 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 00:59:55.194970 | orchestrator | 2025-05-06 00:59:55 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 00:59:55.204205 | orchestrator | 2025-05-06 00:59:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:55.205335 | orchestrator | 2025-05-06 00:59:55 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED
2025-05-06 00:59:55.209810 | orchestrator | 2025-05-06 00:59:55 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state STARTED
2025-05-06 00:59:55.211294 | orchestrator | 2025-05-06 00:59:55 | INFO  | Task 0d08879b-967a-4e2e-9702-70ef49a55b1b is in state SUCCESS
2025-05-06 00:59:55.213999 | orchestrator |
2025-05-06 00:59:55.214099 | orchestrator |
2025-05-06 00:59:55.214114 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 00:59:55.214126 | orchestrator |
2025-05-06 00:59:55.214138 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 00:59:55.214149 | orchestrator | Tuesday 06 May 2025 00:57:23 +0000 (0:00:00.301) 0:00:00.301 ***********
2025-05-06 00:59:55.214160 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:59:55.214173 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:59:55.214183 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:59:55.214195 | orchestrator |
2025-05-06 00:59:55.214206 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 00:59:55.214217 | orchestrator | Tuesday 06 May 2025 00:57:24 +0000 (0:00:00.446) 0:00:00.748 ***********
2025-05-06 00:59:55.214228 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-06 00:59:55.214240 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-06 00:59:55.214251 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-06 00:59:55.214262 | orchestrator |
2025-05-06 00:59:55.214273 | orchestrator | PLAY [Apply role keystone]
*****************************************************
2025-05-06 00:59:55.214284 | orchestrator |
2025-05-06 00:59:55.214295 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-06 00:59:55.214306 | orchestrator | Tuesday 06 May 2025 00:57:24 +0000 (0:00:00.352) 0:00:01.100 ***********
2025-05-06 00:59:55.214317 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:59:55.214333 | orchestrator |
2025-05-06 00:59:55.214350 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-05-06 00:59:55.214362 | orchestrator | Tuesday 06 May 2025 00:57:25 +0000 (0:00:00.758) 0:00:01.859 ***********
2025-05-06 00:59:55.214376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-06 00:59:55.214393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-06 00:59:55.214479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-06 00:59:55.214497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-06 00:59:55.214511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-06 00:59:55.214522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-06 00:59:55.214534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-06 00:59:55.214546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-06 00:59:55.214565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-06 00:59:55.214576 | orchestrator |
2025-05-06 00:59:55.214591 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-05-06 00:59:55.214618 | orchestrator | Tuesday 06 May 2025 00:57:27 +0000 (0:00:02.163) 0:00:04.022 ***********
2025-05-06 00:59:55.214632 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-05-06 00:59:55.214645 | orchestrator |
2025-05-06 00:59:55.214658 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-05-06 00:59:55.214670 | orchestrator | Tuesday 06 May 2025 00:57:28 +0000 (0:00:00.512) 0:00:04.535 ***********
2025-05-06 00:59:55.214683 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:59:55.214696 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:59:55.214709 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:59:55.214722 | orchestrator |
2025-05-06 00:59:55.214734 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-05-06 00:59:55.214747 | orchestrator | Tuesday 06 May 2025 00:57:28 +0000 (0:00:00.409) 0:00:04.945 ***********
2025-05-06 00:59:55.214759 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-06 00:59:55.214772 | orchestrator |
2025-05-06 00:59:55.214785 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-06 00:59:55.214798 | orchestrator | Tuesday 06 May 2025 00:57:28 +0000 (0:00:00.447) 0:00:05.392 ***********
2025-05-06 00:59:55.214811 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:59:55.214825 | orchestrator |
2025-05-06 00:59:55.214838 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-05-06 00:59:55.214851 | orchestrator | Tuesday 06 May 2025 00:57:29 +0000 (0:00:00.626) 0:00:06.018 ***********
2025-05-06 00:59:55.214864 |
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-06 00:59:55.214884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-06 00:59:55.214905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-06 00:59:55.214919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-06 00:59:55.214933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-06 00:59:55.214947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-06 00:59:55.214965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-06 00:59:55.214977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-06 00:59:55.214988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-06 00:59:55.214999 | orchestrator |
2025-05-06 00:59:55.215011 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-05-06 00:59:55.215022 | orchestrator | Tuesday 06 May 2025 00:57:33 +0000 (0:00:03.655) 0:00:09.674 ***********
2025-05-06 00:59:55.215047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-06 00:59:55.215060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-06 00:59:55.215072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-06 00:59:55.215089 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.215101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-06 00:59:55.215114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-06 00:59:55.215132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-06 00:59:55.215148 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:59:55.215160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})
2025-05-06 00:59:55.215178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:59:55.215189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-06 00:59:55.215201 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:55.215212 | orchestrator | 2025-05-06 00:59:55.215223 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-06 00:59:55.215234 | orchestrator | Tuesday 06 May 2025 00:57:34 +0000 (0:00:01.048) 0:00:10.723 *********** 2025-05-06 00:59:55.215246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-06 00:59:55.215264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:59:55.215276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-06 00:59:55.215287 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.215304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-06 00:59:55.215317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:59:55.215329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-06 00:59:55.215346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-06 00:59:55.215358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:59:55.215374 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:55.215386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-06 00:59:55.215398 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:55.215409 | orchestrator | 2025-05-06 00:59:55.215420 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-06 00:59:55.215431 | orchestrator | Tuesday 06 May 2025 00:57:35 +0000 (0:00:01.140) 0:00:11.863 *********** 2025-05-06 00:59:55.215462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.215476 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.215495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.215513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-06 00:59:55.215525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-06 00:59:55.215537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-06 00:59:55.215548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.215560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.215577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.215588 | orchestrator | 2025-05-06 00:59:55.215599 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-06 00:59:55.215616 | orchestrator | Tuesday 06 May 2025 00:57:38 +0000 (0:00:02.742) 0:00:14.605 *********** 2025-05-06 00:59:55.215628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.215640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:59:55.215652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.215664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:59:55.215681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.215702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:59:55.215714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.215726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.215737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.215748 | orchestrator | 2025-05-06 00:59:55.215760 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-06 00:59:55.215771 | orchestrator | Tuesday 06 May 2025 00:57:44 +0000 (0:00:06.794) 0:00:21.400 *********** 2025-05-06 00:59:55.215782 | orchestrator | changed: [testbed-node-0] 2025-05-06 00:59:55.215793 | orchestrator | changed: [testbed-node-1] 2025-05-06 00:59:55.215804 | 
orchestrator | changed: [testbed-node-2] 2025-05-06 00:59:55.215815 | orchestrator | 2025-05-06 00:59:55.215826 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-06 00:59:55.215837 | orchestrator | Tuesday 06 May 2025 00:57:47 +0000 (0:00:02.256) 0:00:23.657 *********** 2025-05-06 00:59:55.215848 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.215859 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:55.215870 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:55.215887 | orchestrator | 2025-05-06 00:59:55.215902 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-06 00:59:55.215913 | orchestrator | Tuesday 06 May 2025 00:57:48 +0000 (0:00:01.075) 0:00:24.732 *********** 2025-05-06 00:59:55.215924 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.215935 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:55.215946 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:55.215957 | orchestrator | 2025-05-06 00:59:55.215968 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-06 00:59:55.215979 | orchestrator | Tuesday 06 May 2025 00:57:48 +0000 (0:00:00.507) 0:00:25.240 *********** 2025-05-06 00:59:55.215990 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.216001 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:55.216012 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:55.216028 | orchestrator | 2025-05-06 00:59:55.216039 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-06 00:59:55.216051 | orchestrator | Tuesday 06 May 2025 00:57:49 +0000 (0:00:00.538) 0:00:25.778 *********** 2025-05-06 00:59:55.216063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.216075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:59:55.216087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.216099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:59:55.216122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.216135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-06 00:59:55.216147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.216158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.216170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.216188 | orchestrator | 2025-05-06 00:59:55.216200 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-06 00:59:55.216211 | orchestrator | Tuesday 06 May 2025 00:57:51 +0000 (0:00:02.571) 0:00:28.350 *********** 2025-05-06 00:59:55.216222 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.216233 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:55.216244 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:55.216255 | orchestrator | 2025-05-06 00:59:55.216266 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-06 00:59:55.216277 | orchestrator | Tuesday 06 May 2025 00:57:52 +0000 (0:00:00.306) 0:00:28.656 *********** 2025-05-06 00:59:55.216288 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-06 00:59:55.216299 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-06 00:59:55.216314 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-06 00:59:55.216326 | orchestrator | 2025-05-06 00:59:55.216337 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-06 00:59:55.216348 | orchestrator | Tuesday 06 May 2025 00:57:54 +0000 (0:00:02.101) 0:00:30.758 *********** 2025-05-06 00:59:55.216359 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-06 00:59:55.216370 | orchestrator | 2025-05-06 00:59:55.216381 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-06 00:59:55.216392 | orchestrator | Tuesday 06 May 2025 00:57:54 +0000 (0:00:00.635) 0:00:31.393 *********** 2025-05-06 00:59:55.216403 | orchestrator | skipping: [testbed-node-0] 2025-05-06 00:59:55.216413 | orchestrator | skipping: [testbed-node-1] 2025-05-06 00:59:55.216424 | orchestrator | skipping: [testbed-node-2] 2025-05-06 00:59:55.216435 | orchestrator | 2025-05-06 00:59:55.216506 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-06 00:59:55.216519 | orchestrator | Tuesday 06 May 2025 00:57:56 +0000 (0:00:01.473) 0:00:32.867 *********** 2025-05-06 00:59:55.216531 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-06 00:59:55.216542 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-06 00:59:55.216622 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-06 00:59:55.216638 | orchestrator | 2025-05-06 00:59:55.216649 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-06 00:59:55.216660 | orchestrator | Tuesday 06 May 2025 00:57:57 +0000 (0:00:01.284) 0:00:34.151 *********** 2025-05-06 00:59:55.216671 | orchestrator | ok: [testbed-node-0] 2025-05-06 
00:59:55.216682 | orchestrator | ok: [testbed-node-1] 2025-05-06 00:59:55.216693 | orchestrator | ok: [testbed-node-2] 2025-05-06 00:59:55.216704 | orchestrator | 2025-05-06 00:59:55.216716 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-06 00:59:55.216726 | orchestrator | Tuesday 06 May 2025 00:57:58 +0000 (0:00:00.346) 0:00:34.498 *********** 2025-05-06 00:59:55.216737 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-06 00:59:55.216748 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-06 00:59:55.216759 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-06 00:59:55.216770 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-06 00:59:55.216781 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-06 00:59:55.216792 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-06 00:59:55.216804 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-06 00:59:55.216815 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-06 00:59:55.216834 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-06 00:59:55.216845 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-06 00:59:55.216856 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-06 00:59:55.216867 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 
'dest': 'fernet-push.sh'}) 2025-05-06 00:59:55.216878 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-06 00:59:55.216889 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-06 00:59:55.216900 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-06 00:59:55.216916 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-06 00:59:55.216928 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-06 00:59:55.216939 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-06 00:59:55.216950 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-06 00:59:55.216961 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-06 00:59:55.216972 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-06 00:59:55.216982 | orchestrator | 2025-05-06 00:59:55.216993 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-06 00:59:55.217004 | orchestrator | Tuesday 06 May 2025 00:58:08 +0000 (0:00:10.262) 0:00:44.761 *********** 2025-05-06 00:59:55.217015 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-06 00:59:55.217026 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-06 00:59:55.217036 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-06 00:59:55.217045 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2025-05-06 00:59:55.217055 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-06 00:59:55.217071 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-06 00:59:55.217082 | orchestrator | 2025-05-06 00:59:55.217092 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-06 00:59:55.217102 | orchestrator | Tuesday 06 May 2025 00:58:11 +0000 (0:00:03.414) 0:00:48.175 *********** 2025-05-06 00:59:55.217113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.217125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.217141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-06 00:59:55.217152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-06 00:59:55.217170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-06 00:59:55.217181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-06 00:59:55.217191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.217208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.217218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-06 00:59:55.217228 | orchestrator | 2025-05-06 00:59:55.217238 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-06 00:59:55.217248 | orchestrator 
| Tuesday 06 May 2025 00:58:14 +0000 (0:00:02.990) 0:00:51.166 ***********
2025-05-06 00:59:55.217258 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.217268 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:59:55.217278 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:59:55.217288 | orchestrator |
2025-05-06 00:59:55.217299 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-05-06 00:59:55.217309 | orchestrator | Tuesday 06 May 2025 00:58:14 +0000 (0:00:00.293) 0:00:51.459 ***********
2025-05-06 00:59:55.217319 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:59:55.217329 | orchestrator |
2025-05-06 00:59:55.217339 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-05-06 00:59:55.217348 | orchestrator | Tuesday 06 May 2025 00:58:17 +0000 (0:00:02.401) 0:00:53.860 ***********
2025-05-06 00:59:55.217358 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:59:55.217368 | orchestrator |
2025-05-06 00:59:55.217378 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-05-06 00:59:55.217388 | orchestrator | Tuesday 06 May 2025 00:58:19 +0000 (0:00:02.348) 0:00:56.208 ***********
2025-05-06 00:59:55.217398 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:59:55.217408 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:59:55.217418 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:59:55.217428 | orchestrator |
2025-05-06 00:59:55.217438 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-05-06 00:59:55.217470 | orchestrator | Tuesday 06 May 2025 00:58:20 +0000 (0:00:00.984) 0:00:57.193 ***********
2025-05-06 00:59:55.217487 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:59:55.217510 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:59:55.217527 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:59:55.217537 | orchestrator |
2025-05-06 00:59:55.217548 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-05-06 00:59:55.217558 | orchestrator | Tuesday 06 May 2025 00:58:21 +0000 (0:00:00.366) 0:00:57.560 ***********
2025-05-06 00:59:55.217574 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.217584 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:59:55.217594 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:59:55.217604 | orchestrator |
2025-05-06 00:59:55.217614 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-05-06 00:59:55.217624 | orchestrator | Tuesday 06 May 2025 00:58:21 +0000 (0:00:00.644) 0:00:58.204 ***********
2025-05-06 00:59:55.217634 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:59:55.217644 | orchestrator |
2025-05-06 00:59:55.217654 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-05-06 00:59:55.217664 | orchestrator | Tuesday 06 May 2025 00:58:35 +0000 (0:00:14.038) 0:01:12.243 ***********
2025-05-06 00:59:55.217673 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:59:55.217684 | orchestrator |
2025-05-06 00:59:55.217693 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-06 00:59:55.217703 | orchestrator | Tuesday 06 May 2025 00:58:45 +0000 (0:00:09.399) 0:01:21.643 ***********
2025-05-06 00:59:55.217713 | orchestrator |
2025-05-06 00:59:55.217723 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-06 00:59:55.217738 | orchestrator | Tuesday 06 May 2025 00:58:45 +0000 (0:00:00.060) 0:01:21.703 ***********
2025-05-06 00:59:55.217749 | orchestrator |
2025-05-06 00:59:55.217759 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-06 00:59:55.217769 | orchestrator | Tuesday 06 May 2025 00:58:45 +0000 (0:00:00.055) 0:01:21.759 ***********
2025-05-06 00:59:55.217779 | orchestrator |
2025-05-06 00:59:55.217788 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-05-06 00:59:55.217798 | orchestrator | Tuesday 06 May 2025 00:58:45 +0000 (0:00:00.055) 0:01:21.814 ***********
2025-05-06 00:59:55.217808 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:59:55.217818 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:59:55.217828 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:59:55.217838 | orchestrator |
2025-05-06 00:59:55.217848 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-05-06 00:59:55.217858 | orchestrator | Tuesday 06 May 2025 00:58:59 +0000 (0:00:14.557) 0:01:36.371 ***********
2025-05-06 00:59:55.217868 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:59:55.217878 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:59:55.217888 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:59:55.217898 | orchestrator |
2025-05-06 00:59:55.217908 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-05-06 00:59:55.217918 | orchestrator | Tuesday 06 May 2025 00:59:04 +0000 (0:00:05.045) 0:01:41.417 ***********
2025-05-06 00:59:55.217928 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:59:55.217938 | orchestrator | changed: [testbed-node-1]
2025-05-06 00:59:55.217948 | orchestrator | changed: [testbed-node-2]
2025-05-06 00:59:55.217958 | orchestrator |
2025-05-06 00:59:55.217968 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-06 00:59:55.217978 | orchestrator | Tuesday 06 May 2025 00:59:10 +0000 (0:00:05.381) 0:01:46.798 ***********
2025-05-06 00:59:55.217989 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 00:59:55.217999 | orchestrator |
2025-05-06 00:59:55.218009 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-05-06 00:59:55.218056 | orchestrator | Tuesday 06 May 2025 00:59:11 +0000 (0:00:00.863) 0:01:47.662 ***********
2025-05-06 00:59:55.218069 | orchestrator | ok: [testbed-node-1]
2025-05-06 00:59:55.218079 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:59:55.218090 | orchestrator | ok: [testbed-node-2]
2025-05-06 00:59:55.218100 | orchestrator |
2025-05-06 00:59:55.218110 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-05-06 00:59:55.218120 | orchestrator | Tuesday 06 May 2025 00:59:12 +0000 (0:00:00.975) 0:01:48.637 ***********
2025-05-06 00:59:55.218130 | orchestrator | changed: [testbed-node-0]
2025-05-06 00:59:55.218146 | orchestrator |
2025-05-06 00:59:55.218156 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-05-06 00:59:55.218166 | orchestrator | Tuesday 06 May 2025 00:59:13 +0000 (0:00:01.451) 0:01:50.089 ***********
2025-05-06 00:59:55.218176 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-05-06 00:59:55.218186 | orchestrator |
2025-05-06 00:59:55.218196 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-05-06 00:59:55.218206 | orchestrator | Tuesday 06 May 2025 00:59:22 +0000 (0:00:09.363) 0:01:59.453 ***********
2025-05-06 00:59:55.218216 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-05-06 00:59:55.218227 | orchestrator |
2025-05-06 00:59:55.218237 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-05-06 00:59:55.218247 | orchestrator | Tuesday 06 May 2025 00:59:41 +0000 (0:00:18.560) 0:02:18.013 ***********
2025-05-06 00:59:55.218257 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-05-06 00:59:55.218267 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-05-06 00:59:55.218277 | orchestrator |
2025-05-06 00:59:55.218287 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-05-06 00:59:55.218297 | orchestrator | Tuesday 06 May 2025 00:59:48 +0000 (0:00:06.820) 0:02:24.833 ***********
2025-05-06 00:59:55.218307 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.218317 | orchestrator |
2025-05-06 00:59:55.218327 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-05-06 00:59:55.218340 | orchestrator | Tuesday 06 May 2025 00:59:48 +0000 (0:00:00.124) 0:02:24.957 ***********
2025-05-06 00:59:55.218355 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:55.218365 | orchestrator |
2025-05-06 00:59:55.218375 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-05-06 00:59:55.218390 | orchestrator | Tuesday 06 May 2025 00:59:48 +0000 (0:00:00.109) 0:02:25.067 ***********
2025-05-06 00:59:58.245850 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:58.246112 | orchestrator |
2025-05-06 00:59:58.246139 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-05-06 00:59:58.246154 | orchestrator | Tuesday 06 May 2025 00:59:48 +0000 (0:00:00.374) 0:02:25.189 ***********
2025-05-06 00:59:58.246168 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:58.246182 | orchestrator |
2025-05-06 00:59:58.246196 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-05-06 00:59:58.246210 | orchestrator | Tuesday 06 May 2025 00:59:49 +0000 (0:00:00.374) 0:02:25.564 ***********
2025-05-06 00:59:58.246224 | orchestrator | ok: [testbed-node-0]
2025-05-06 00:59:58.246238 | orchestrator |
2025-05-06 00:59:58.246270 |
orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-06 00:59:58.246285 | orchestrator | Tuesday 06 May 2025 00:59:52 +0000 (0:00:03.732) 0:02:29.297 ***********
2025-05-06 00:59:58.246299 | orchestrator | skipping: [testbed-node-0]
2025-05-06 00:59:58.246313 | orchestrator | skipping: [testbed-node-1]
2025-05-06 00:59:58.246327 | orchestrator | skipping: [testbed-node-2]
2025-05-06 00:59:58.246340 | orchestrator |
2025-05-06 00:59:58.246354 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 00:59:58.246369 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-06 00:59:58.246384 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-06 00:59:58.246398 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-06 00:59:58.246412 | orchestrator |
2025-05-06 00:59:58.246425 | orchestrator |
2025-05-06 00:59:58.246554 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 00:59:58.246599 | orchestrator | Tuesday 06 May 2025 00:59:53 +0000 (0:00:00.526) 0:02:29.823 ***********
2025-05-06 00:59:58.246613 | orchestrator | ===============================================================================
2025-05-06 00:59:58.246627 | orchestrator | service-ks-register : keystone | Creating services --------------------- 18.56s
2025-05-06 00:59:58.246641 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.56s
2025-05-06 00:59:58.246655 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.04s
2025-05-06 00:59:58.246669 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.26s
2025-05-06 00:59:58.246682 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.40s
2025-05-06 00:59:58.246696 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.36s
2025-05-06 00:59:58.246710 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.82s
2025-05-06 00:59:58.246723 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.79s
2025-05-06 00:59:58.246737 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.38s
2025-05-06 00:59:58.246751 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.05s
2025-05-06 00:59:58.246764 | orchestrator | keystone : Creating default user role ----------------------------------- 3.73s
2025-05-06 00:59:58.246778 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.66s
2025-05-06 00:59:58.246792 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.41s
2025-05-06 00:59:58.246805 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.99s
2025-05-06 00:59:58.246819 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.74s
2025-05-06 00:59:58.246833 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.57s
2025-05-06 00:59:58.246847 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.40s
2025-05-06 00:59:58.246861 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.35s
2025-05-06 00:59:58.246874 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.26s
2025-05-06 00:59:58.246888 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.16s
2025-05-06 00:59:58.246902 | orchestrator | 2025-05-06
00:59:55 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED
2025-05-06 00:59:58.246917 | orchestrator | 2025-05-06 00:59:55 | INFO  | Wait 1 second(s) until the next check
2025-05-06 00:59:58.246948 | orchestrator | 2025-05-06 00:59:58 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 00:59:58.250800 | orchestrator | 2025-05-06 00:59:58 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 00:59:58.250841 | orchestrator | 2025-05-06 00:59:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 00:59:58.250874 | orchestrator | 2025-05-06 00:59:58 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED
2025-05-06 00:59:58.251254 | orchestrator | 2025-05-06 00:59:58 | INFO  | Task 2484f385-c3d9-4778-8195-096b86868c7b is in state SUCCESS
2025-05-06 00:59:58.252196 | orchestrator | 2025-05-06 00:59:58 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED
2025-05-06 00:59:58.252563 | orchestrator | 2025-05-06 00:59:58 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:00:01.296591 | orchestrator | 2025-05-06 01:00:01 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:00:01.297403 | orchestrator | 2025-05-06 01:00:01 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 01:00:01.297472 | orchestrator | 2025-05-06 01:00:01 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED
2025-05-06 01:00:01.297527 | orchestrator | 2025-05-06 01:00:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:00:01.298956 | orchestrator | 2025-05-06 01:00:01 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED
2025-05-06 01:00:01.300168 | orchestrator | 2025-05-06 01:00:01 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED
2025-05-06 01:00:01.300420 | orchestrator | 2025-05-06 01:00:01 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:00:04.334950 | orchestrator | 2025-05-06 01:00:04 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:00:04.335683 | orchestrator | 2025-05-06 01:00:04 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 01:00:04.337346 | orchestrator | 2025-05-06 01:00:04 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED
2025-05-06 01:00:04.339559 | orchestrator | 2025-05-06 01:00:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:00:04.340402 | orchestrator | 2025-05-06 01:00:04 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED
2025-05-06 01:00:04.341954 | orchestrator | 2025-05-06 01:00:04 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED
2025-05-06 01:00:07.391976 | orchestrator | 2025-05-06 01:00:04 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:00:07.392117 | orchestrator | 2025-05-06 01:00:07 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:00:07.393053 | orchestrator | 2025-05-06 01:00:07 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 01:00:07.394098 | orchestrator | 2025-05-06 01:00:07 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED
2025-05-06 01:00:07.394812 | orchestrator | 2025-05-06 01:00:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:00:07.395577 | orchestrator | 2025-05-06 01:00:07 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED
2025-05-06 01:00:07.396353 | orchestrator | 2025-05-06 01:00:07 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED
2025-05-06 01:00:07.396493 | orchestrator | 2025-05-06 01:00:07 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:00:10.447647 | orchestrator | 2025-05-06 01:00:10 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:00:10.450203 | orchestrator | 2025-05-06 01:00:10 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 01:00:10.451296 | orchestrator | 2025-05-06 01:00:10 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED
2025-05-06 01:00:10.453688 | orchestrator | 2025-05-06 01:00:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:00:10.455798 | orchestrator | 2025-05-06 01:00:10 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED
2025-05-06 01:00:10.457574 | orchestrator | 2025-05-06 01:00:10 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED
2025-05-06 01:00:13.516064 | orchestrator | 2025-05-06 01:00:10 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:00:13.516211 | orchestrator | 2025-05-06 01:00:13 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:00:13.516578 | orchestrator | 2025-05-06 01:00:13 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 01:00:13.518072 | orchestrator | 2025-05-06 01:00:13 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED
2025-05-06 01:00:13.519283 | orchestrator | 2025-05-06 01:00:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:00:13.520793 | orchestrator | 2025-05-06 01:00:13 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED
2025-05-06 01:00:13.521992 | orchestrator | 2025-05-06 01:00:13 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED
2025-05-06 01:00:16.563550 | orchestrator | 2025-05-06 01:00:13 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:00:16.563699 | orchestrator | 2025-05-06 01:00:16 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:00:16.564663 | orchestrator | 2025-05-06 01:00:16 | INFO  | Task
bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:16.566011 | orchestrator | 2025-05-06 01:00:16 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:16.566862 | orchestrator | 2025-05-06 01:00:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:16.567555 | orchestrator | 2025-05-06 01:00:16 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:16.568839 | orchestrator | 2025-05-06 01:00:16 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:19.625913 | orchestrator | 2025-05-06 01:00:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:19.626162 | orchestrator | 2025-05-06 01:00:19 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:19.627290 | orchestrator | 2025-05-06 01:00:19 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:19.629193 | orchestrator | 2025-05-06 01:00:19 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:19.630699 | orchestrator | 2025-05-06 01:00:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:19.631997 | orchestrator | 2025-05-06 01:00:19 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:19.634533 | orchestrator | 2025-05-06 01:00:19 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:22.680707 | orchestrator | 2025-05-06 01:00:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:22.680842 | orchestrator | 2025-05-06 01:00:22 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:22.683087 | orchestrator | 2025-05-06 01:00:22 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:22.684049 | orchestrator | 2025-05-06 01:00:22 | INFO  | Task 
b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:22.685685 | orchestrator | 2025-05-06 01:00:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:22.687222 | orchestrator | 2025-05-06 01:00:22 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:22.688742 | orchestrator | 2025-05-06 01:00:22 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:25.733834 | orchestrator | 2025-05-06 01:00:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:25.733983 | orchestrator | 2025-05-06 01:00:25 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:25.734932 | orchestrator | 2025-05-06 01:00:25 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:25.734962 | orchestrator | 2025-05-06 01:00:25 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:25.735015 | orchestrator | 2025-05-06 01:00:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:25.738574 | orchestrator | 2025-05-06 01:00:25 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:28.791941 | orchestrator | 2025-05-06 01:00:25 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:28.792051 | orchestrator | 2025-05-06 01:00:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:28.792082 | orchestrator | 2025-05-06 01:00:28 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:28.792964 | orchestrator | 2025-05-06 01:00:28 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:28.793017 | orchestrator | 2025-05-06 01:00:28 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:28.794507 | orchestrator | 2025-05-06 01:00:28 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:28.795973 | orchestrator | 2025-05-06 01:00:28 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:28.797327 | orchestrator | 2025-05-06 01:00:28 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:31.851454 | orchestrator | 2025-05-06 01:00:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:31.851590 | orchestrator | 2025-05-06 01:00:31 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:31.853426 | orchestrator | 2025-05-06 01:00:31 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:31.854994 | orchestrator | 2025-05-06 01:00:31 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:31.856900 | orchestrator | 2025-05-06 01:00:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:31.858136 | orchestrator | 2025-05-06 01:00:31 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:31.859604 | orchestrator | 2025-05-06 01:00:31 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:34.907106 | orchestrator | 2025-05-06 01:00:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:34.907225 | orchestrator | 2025-05-06 01:00:34 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:34.909740 | orchestrator | 2025-05-06 01:00:34 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:34.910526 | orchestrator | 2025-05-06 01:00:34 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:34.912505 | orchestrator | 2025-05-06 01:00:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:34.913114 | orchestrator | 2025-05-06 01:00:34 | INFO  | Task 
314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:34.914003 | orchestrator | 2025-05-06 01:00:34 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:34.914174 | orchestrator | 2025-05-06 01:00:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:37.962556 | orchestrator | 2025-05-06 01:00:37 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:37.963026 | orchestrator | 2025-05-06 01:00:37 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:37.963961 | orchestrator | 2025-05-06 01:00:37 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:37.964935 | orchestrator | 2025-05-06 01:00:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:37.965761 | orchestrator | 2025-05-06 01:00:37 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:37.966641 | orchestrator | 2025-05-06 01:00:37 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:37.967704 | orchestrator | 2025-05-06 01:00:37 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:41.053470 | orchestrator | 2025-05-06 01:00:41 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:41.056645 | orchestrator | 2025-05-06 01:00:41 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:41.059340 | orchestrator | 2025-05-06 01:00:41 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:41.059379 | orchestrator | 2025-05-06 01:00:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:41.059419 | orchestrator | 2025-05-06 01:00:41 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:44.089828 | orchestrator | 2025-05-06 01:00:41 | INFO  | Task 
08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:44.089939 | orchestrator | 2025-05-06 01:00:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:44.089975 | orchestrator | 2025-05-06 01:00:44 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:44.092820 | orchestrator | 2025-05-06 01:00:44 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:44.093352 | orchestrator | 2025-05-06 01:00:44 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:44.093380 | orchestrator | 2025-05-06 01:00:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:44.093462 | orchestrator | 2025-05-06 01:00:44 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:44.093493 | orchestrator | 2025-05-06 01:00:44 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:47.127085 | orchestrator | 2025-05-06 01:00:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:47.127305 | orchestrator | 2025-05-06 01:00:47 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:47.127810 | orchestrator | 2025-05-06 01:00:47 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:47.127849 | orchestrator | 2025-05-06 01:00:47 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:47.128305 | orchestrator | 2025-05-06 01:00:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:47.128844 | orchestrator | 2025-05-06 01:00:47 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:47.129401 | orchestrator | 2025-05-06 01:00:47 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:50.164207 | orchestrator | 2025-05-06 01:00:47 | INFO  | Wait 1 
second(s) until the next check 2025-05-06 01:00:50.164331 | orchestrator | 2025-05-06 01:00:50 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:50.164817 | orchestrator | 2025-05-06 01:00:50 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:50.165368 | orchestrator | 2025-05-06 01:00:50 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:50.166252 | orchestrator | 2025-05-06 01:00:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:50.167963 | orchestrator | 2025-05-06 01:00:50 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:50.168473 | orchestrator | 2025-05-06 01:00:50 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:53.195511 | orchestrator | 2025-05-06 01:00:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:53.195737 | orchestrator | 2025-05-06 01:00:53 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:00:53.196203 | orchestrator | 2025-05-06 01:00:53 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:00:53.196232 | orchestrator | 2025-05-06 01:00:53 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state STARTED 2025-05-06 01:00:53.196255 | orchestrator | 2025-05-06 01:00:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:00:53.196840 | orchestrator | 2025-05-06 01:00:53 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:00:53.197459 | orchestrator | 2025-05-06 01:00:53 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:00:56.242740 | orchestrator | 2025-05-06 01:00:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:00:56.242872 | orchestrator | 2025-05-06 01:00:56 | INFO  | Task 
deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:00:56.243535 | orchestrator | 2025-05-06 01:00:56 | INFO  | Task c0b96b6c-2984-48de-9100-21c6a5117a52 is in state STARTED
2025-05-06 01:00:56.244513 | orchestrator | 2025-05-06 01:00:56 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 01:00:56.245398 | orchestrator | 2025-05-06 01:00:56 | INFO  | Task b98ef603-fbdc-42c5-a213-ddb7fdb7e48c is in state SUCCESS
2025-05-06 01:00:56.245825 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-06 01:00:56.245855 | orchestrator | TASK [Check ceph keys] *********************************************************
2025-05-06 01:00:56.245869 | orchestrator | Tuesday 06 May 2025 00:59:18 +0000 (0:00:00.138) 0:00:00.138 ***********
2025-05-06 01:00:56.245883 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-06 01:00:56.245897 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-06 01:00:56.245911 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-06 01:00:56.245940 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-06 01:00:56.245955 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-06 01:00:56.245969 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-06 01:00:56.245983 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-06 01:00:56.245996 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-06 01:00:56.246010 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-06 01:00:56.246088 | orchestrator | TASK [Set _fetch_ceph_keys fact] ***********************************************
2025-05-06 01:00:56.246101 | orchestrator | Tuesday 06 May 2025 00:59:21 +0000 (0:00:02.870) 0:00:03.009 ***********
2025-05-06 01:00:56.246115 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-06 01:00:56.246129 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-06 01:00:56.246234 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-06 01:00:56.246276 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-06 01:00:56.246291 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-06 01:00:56.246304 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-06 01:00:56.246318 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-06 01:00:56.246332 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-06 01:00:56.246346 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-06 01:00:56.246432 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] ***
2025-05-06 01:00:56.246450 | orchestrator | Tuesday 06 May 2025 00:59:21 +0000 (0:00:00.220) 0:00:03.229 ***********
2025-05-06 01:00:56.246464 | orchestrator | ok: [testbed-manager] => {
2025-05-06 01:00:56.246489 | orchestrator |     "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete."
2025-05-06 01:00:56.246505 | orchestrator | }
2025-05-06 01:00:56.246533 | orchestrator | TASK [Fetch ceph keys from the first monitor node] *****************************
2025-05-06 01:00:56.246547 | orchestrator | Tuesday 06 May 2025 00:59:21 +0000 (0:00:00.163) 0:00:03.393 ***********
2025-05-06 01:00:56.246561 | orchestrator | changed: [testbed-manager]
2025-05-06 01:00:56.246589 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] ***********
2025-05-06 01:00:56.246603 | orchestrator | Tuesday 06 May 2025 00:59:54 +0000 (0:00:33.071) 0:00:36.464 ***********
2025-05-06 01:00:56.246617 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'})
2025-05-06 01:00:56.246646 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ********************
2025-05-06 01:00:56.246659 | orchestrator | Tuesday 06 May 2025 00:59:55 +0000 (0:00:00.529) 0:00:36.994 ***********
2025-05-06 01:00:56.246674 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'})
2025-05-06 01:00:56.246689 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'})
2025-05-06 01:00:56.246703 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'})
2025-05-06 01:00:56.246717 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'})
2025-05-06 01:00:56.246731 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'})
2025-05-06 01:00:56.246846 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'})
2025-05-06 01:00:56.248070 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'})
2025-05-06 01:00:56.248102 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'})
2025-05-06 01:00:56.248143 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] *******************
2025-05-06 01:00:56.248160 | orchestrator | Tuesday 06 May 2025 00:59:58 +0000 (0:00:02.514) 0:00:39.509 ***********
2025-05-06 01:00:56.248174 | orchestrator | skipping: [testbed-manager]
2025-05-06 01:00:56.248203 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 01:00:56.248217 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-06 01:00:56.248245 | orchestrator | Tuesday 06 May 2025 00:59:58 +0000 (0:00:00.016) 0:00:39.525 ***********
2025-05-06 01:00:56.248259 | orchestrator | ===============================================================================
2025-05-06 01:00:56.248272 | orchestrator | Fetch ceph keys from the first monitor node ----------------------------- 33.07s
2025-05-06 01:00:56.248286 | orchestrator | Check ceph keys --------------------------------------------------------- 2.87s
2025-05-06 01:00:56.248320 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.51s
2025-05-06 01:00:56.248335 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.53s
2025-05-06 01:00:56.248348 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.22s
2025-05-06 01:00:56.248362 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.16s
2025-05-06 01:00:56.248401 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.02s
2025-05-06 01:00:56.248430 | orchestrator | 2025-05-06 01:00:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:00:56.248444 | orchestrator | 2025-05-06 01:00:56 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED
2025-05-06 01:00:56.248464 | orchestrator | 2025-05-06 01:00:56 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED
2025-05-06 01:00:59.273175 | orchestrator | 2025-05-06 01:00:56 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:00:59.273443 | orchestrator | 2025-05-06 01:00:59 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:00:59.274234 | orchestrator | 2025-05-06 01:00:59 | INFO  | Task c0b96b6c-2984-48de-9100-21c6a5117a52 is in state STARTED
2025-05-06 01:00:59.274271 | orchestrator | 2025-05-06 01:00:59 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 01:00:59.275124 | orchestrator | 2025-05-06 01:00:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:00:59.275812 | orchestrator | 2025-05-06 01:00:59 | INFO  |
Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED
2025-05-06 01:00:59.276600 | orchestrator | 2025-05-06 01:00:59 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED
[polling condensed] From 01:01:02 to 01:01:26, tasks deeb9f21-7aea-456e-8a44-cc3fb0c104b4, c0b96b6c-2984-48de-9100-21c6a5117a52, bdbf7335-82be-4be7-86fc-3abfdf977382, 6bf1245d-e18f-4d09-b4c2-f5227351db01, 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 and 08fb6601-c9f3-43d8-aaa6-458fb4621e1e were repeatedly reported in state STARTED, rechecked every ~3 s ("Wait 1 second(s) until the next check").
2025-05-06 01:01:26.604510 | orchestrator | 2025-05-06 01:01:26 | INFO  | Task
c0b96b6c-2984-48de-9100-21c6a5117a52 is in state STARTED 2025-05-06 01:01:26.610374 | orchestrator | 2025-05-06 01:01:26 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:01:26.611283 | orchestrator | 2025-05-06 01:01:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:01:26.612124 | orchestrator | 2025-05-06 01:01:26 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:01:26.613036 | orchestrator | 2025-05-06 01:01:26 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED 2025-05-06 01:01:26.613177 | orchestrator | 2025-05-06 01:01:26 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:01:29.652419 | orchestrator | 2025-05-06 01:01:29 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:01:29.652774 | orchestrator | 2025-05-06 01:01:29 | INFO  | Task c0b96b6c-2984-48de-9100-21c6a5117a52 is in state SUCCESS 2025-05-06 01:01:29.652810 | orchestrator | 2025-05-06 01:01:29.652837 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-06 01:01:29.652853 | orchestrator | 2025-05-06 01:01:29.652866 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-06 01:01:29.652880 | orchestrator | Tuesday 06 May 2025 01:00:00 +0000 (0:00:00.123) 0:00:00.123 *********** 2025-05-06 01:01:29.652893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-06 01:01:29.652924 | orchestrator | 2025-05-06 01:01:29.652939 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-06 01:01:29.652951 | orchestrator | Tuesday 06 May 2025 01:00:01 +0000 (0:00:00.210) 0:00:00.334 *********** 2025-05-06 01:01:29.652965 | orchestrator | changed: [testbed-manager] => 
(item=/opt/cephclient/configuration) 2025-05-06 01:01:29.652978 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-06 01:01:29.652991 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-06 01:01:29.653003 | orchestrator | 2025-05-06 01:01:29.653016 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-06 01:01:29.653028 | orchestrator | Tuesday 06 May 2025 01:00:02 +0000 (0:00:00.953) 0:00:01.287 *********** 2025-05-06 01:01:29.653040 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-06 01:01:29.653053 | orchestrator | 2025-05-06 01:01:29.653065 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-06 01:01:29.653077 | orchestrator | Tuesday 06 May 2025 01:00:02 +0000 (0:00:00.890) 0:00:02.178 *********** 2025-05-06 01:01:29.653090 | orchestrator | changed: [testbed-manager] 2025-05-06 01:01:29.653108 | orchestrator | 2025-05-06 01:01:29.653121 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-06 01:01:29.653133 | orchestrator | Tuesday 06 May 2025 01:00:03 +0000 (0:00:00.702) 0:00:02.880 *********** 2025-05-06 01:01:29.653146 | orchestrator | changed: [testbed-manager] 2025-05-06 01:01:29.653241 | orchestrator | 2025-05-06 01:01:29.653259 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-06 01:01:29.653272 | orchestrator | Tuesday 06 May 2025 01:00:04 +0000 (0:00:00.843) 0:00:03.723 *********** 2025-05-06 01:01:29.653284 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
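The `FAILED - RETRYING ... (10 retries left)` line above is Ansible's standard `retries`/`until` pattern: the task is re-run with a delay until its condition holds or the retry budget is exhausted. A minimal Python sketch of the same poll-with-retries idea (the function and its exact retry accounting are illustrative, not taken from the role):

```python
import time

def wait_until(check, retries=10, delay=5):
    """Re-run `check` until it returns truthy, loosely mirroring
    Ansible's retries/until loop; raises after `retries` attempts."""
    for attempt in range(retries):
        result = check()
        if result:
            return result
        # Ansible would log e.g. "FAILED - RETRYING: ... (9 retries left)."
        print(f"FAILED - RETRYING ({retries - attempt - 1} retries left).")
        time.sleep(delay)
    raise TimeoutError(f"condition not met after {retries} attempts")
```

Here the "Manage cephclient service" task succeeded on a later attempt, which is why the log shows one RETRYING line followed by `ok`.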
2025-05-06 01:01:29.653297 | orchestrator | ok: [testbed-manager] 2025-05-06 01:01:29.653310 | orchestrator | 2025-05-06 01:01:29.653322 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-06 01:01:29.653357 | orchestrator | Tuesday 06 May 2025 01:00:45 +0000 (0:00:40.817) 0:00:44.541 *********** 2025-05-06 01:01:29.653379 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-06 01:01:29.653398 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-06 01:01:29.653417 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-06 01:01:29.653437 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-06 01:01:29.653456 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-06 01:01:29.653469 | orchestrator | 2025-05-06 01:01:29.653481 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-06 01:01:29.653494 | orchestrator | Tuesday 06 May 2025 01:00:48 +0000 (0:00:03.447) 0:00:47.988 *********** 2025-05-06 01:01:29.653506 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-06 01:01:29.653540 | orchestrator | 2025-05-06 01:01:29.653553 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-06 01:01:29.653565 | orchestrator | Tuesday 06 May 2025 01:00:49 +0000 (0:00:00.394) 0:00:48.382 *********** 2025-05-06 01:01:29.653577 | orchestrator | skipping: [testbed-manager] 2025-05-06 01:01:29.653596 | orchestrator | 2025-05-06 01:01:29.653609 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-06 01:01:29.653621 | orchestrator | Tuesday 06 May 2025 01:00:49 +0000 (0:00:00.103) 0:00:48.486 *********** 2025-05-06 01:01:29.653633 | orchestrator | skipping: [testbed-manager] 2025-05-06 01:01:29.653646 | orchestrator | 2025-05-06 01:01:29.653658 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-05-06 01:01:29.653670 | orchestrator | Tuesday 06 May 2025 01:00:49 +0000 (0:00:00.255) 0:00:48.742 *********** 2025-05-06 01:01:29.653682 | orchestrator | changed: [testbed-manager] 2025-05-06 01:01:29.653695 | orchestrator | 2025-05-06 01:01:29.653707 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-06 01:01:29.653719 | orchestrator | Tuesday 06 May 2025 01:00:50 +0000 (0:00:01.292) 0:00:50.034 *********** 2025-05-06 01:01:29.653732 | orchestrator | changed: [testbed-manager] 2025-05-06 01:01:29.653744 | orchestrator | 2025-05-06 01:01:29.653756 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ****** 2025-05-06 01:01:29.653769 | orchestrator | Tuesday 06 May 2025 01:00:51 +0000 (0:00:00.479) 0:00:50.817 *********** 2025-05-06 01:01:29.653781 | orchestrator | changed: [testbed-manager] 2025-05-06 01:01:29.653798 | orchestrator | 2025-05-06 01:01:29.653810 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-06 01:01:29.653823 | orchestrator | Tuesday 06 May 2025 01:00:52 +0000 (0:00:00.479) 0:00:51.297 *********** 2025-05-06 01:01:29.653835 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-06 01:01:29.653847 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-06 01:01:29.653940 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-06 01:01:29.653958 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-06 01:01:29.653972 | orchestrator | 2025-05-06 01:01:29.653987 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 01:01:29.654001 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-06 01:01:29.654099 | orchestrator | 2025-05-06 01:01:29.654127 | orchestrator | Tuesday 06 May 2025 
01:00:53 +0000 (0:00:01.148) 0:00:52.445 *********** 2025-05-06 01:01:29.654898 | orchestrator | =============================================================================== 2025-05-06 01:01:29.654929 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.82s 2025-05-06 01:01:29.654941 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.45s 2025-05-06 01:01:29.654954 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.29s 2025-05-06 01:01:29.654967 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.15s 2025-05-06 01:01:29.654979 | orchestrator | osism.services.cephclient : Create required directories ----------------- 0.95s 2025-05-06 01:01:29.654991 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 0.89s 2025-05-06 01:01:29.655003 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.84s 2025-05-06 01:01:29.655015 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s 2025-05-06 01:01:29.655028 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.70s 2025-05-06 01:01:29.655040 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.48s 2025-05-06 01:01:29.655052 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.39s 2025-05-06 01:01:29.655064 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.26s 2025-05-06 01:01:29.655076 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-05-06 01:01:29.655103 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.10s 2025-05-06 01:01:29.655115 | orchestrator | 2025-05-06 01:01:29.655128 | orchestrator 
| 2025-05-06 01:01:29 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:01:29.655140 | orchestrator | 2025-05-06 01:01:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:01:29.655159 | orchestrator | 2025-05-06 01:01:29 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:01:29.655688 | orchestrator | 2025-05-06 01:01:29 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state STARTED [... identical status checks for the same tasks repeated every 3 seconds from 01:01:32 to 01:01:57 ...] 2025-05-06 01:02:00.047857 | orchestrator | 2025-05-06 01:01:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:02:00.048067 | orchestrator | 2025-05-06 01:02:00 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:02:00.048425 | orchestrator | 2025-05-06 01:02:00 | INFO  | Task 
becfae1d-342a-4bda-8e27-5ccb811fdb00 is in state STARTED 2025-05-06 01:02:00.048461 | orchestrator | 2025-05-06 01:02:00 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:02:00.048991 | orchestrator | 2025-05-06 01:02:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:02:00.049568 | orchestrator | 2025-05-06 01:02:00 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED 2025-05-06 01:02:00.050478 | orchestrator | 2025-05-06 01:02:00 | INFO  | Task 08fb6601-c9f3-43d8-aaa6-458fb4621e1e is in state SUCCESS 2025-05-06 01:02:00.050875 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-06 01:02:00.050905 | orchestrator | 2025-05-06 01:02:00.050921 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2025-05-06 01:02:00.050937 | orchestrator | 2025-05-06 01:02:00.050952 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-06 01:02:00.050966 | orchestrator | Tuesday 06 May 2025 01:00:56 +0000 (0:00:00.330) 0:00:00.330 *********** 2025-05-06 01:02:00.050981 | orchestrator | changed: [testbed-manager] 2025-05-06 01:02:00.051013 | orchestrator | 2025-05-06 01:02:00.051029 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-06 01:02:00.051044 | orchestrator | Tuesday 06 May 2025 01:00:57 +0000 (0:00:01.131) 0:00:01.462 *********** 2025-05-06 01:02:00.051058 | orchestrator | changed: [testbed-manager] 2025-05-06 01:02:00.051074 | orchestrator | 2025-05-06 01:02:00.051089 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-06 01:02:00.051103 | orchestrator | Tuesday 06 May 2025 01:00:58 +0000 (0:00:01.005) 0:00:02.467 *********** 2025-05-06 01:02:00.051118 | orchestrator | changed: [testbed-manager] 2025-05-06 01:02:00.051133 | 
orchestrator | 2025-05-06 01:02:00.051232 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-06 01:02:00.051254 | orchestrator | Tuesday 06 May 2025 01:00:59 +0000 (0:00:00.810) 0:00:03.277 *********** 2025-05-06 01:02:00.051269 | orchestrator | changed: [testbed-manager] 2025-05-06 01:02:00.051285 | orchestrator | 2025-05-06 01:02:00.051318 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-06 01:02:00.051341 | orchestrator | Tuesday 06 May 2025 01:01:00 +0000 (0:00:00.809) 0:00:04.087 *********** 2025-05-06 01:02:00.051358 | orchestrator | changed: [testbed-manager] 2025-05-06 01:02:00.051372 | orchestrator | 2025-05-06 01:02:00.051387 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-06 01:02:00.051401 | orchestrator | Tuesday 06 May 2025 01:01:01 +0000 (0:00:00.905) 0:00:04.992 *********** 2025-05-06 01:02:00.051416 | orchestrator | changed: [testbed-manager] 2025-05-06 01:02:00.051431 | orchestrator | 2025-05-06 01:02:00.051446 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-06 01:02:00.051460 | orchestrator | Tuesday 06 May 2025 01:01:01 +0000 (0:00:00.816) 0:00:05.809 *********** 2025-05-06 01:02:00.051475 | orchestrator | changed: [testbed-manager] 2025-05-06 01:02:00.051489 | orchestrator | 2025-05-06 01:02:00.051504 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-06 01:02:00.051539 | orchestrator | Tuesday 06 May 2025 01:01:03 +0000 (0:00:01.289) 0:00:07.099 *********** 2025-05-06 01:02:00.051553 | orchestrator | changed: [testbed-manager] 2025-05-06 01:02:00.051567 | orchestrator | 2025-05-06 01:02:00.051581 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-06 01:02:00.051595 | orchestrator | Tuesday 06 May 2025 01:01:04 
+0000 (0:00:01.284) 0:00:08.384 *********** 2025-05-06 01:02:00.051608 | orchestrator | changed: [testbed-manager] 2025-05-06 01:02:00.051622 | orchestrator | 2025-05-06 01:02:00.051636 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-06 01:02:00.051650 | orchestrator | Tuesday 06 May 2025 01:01:22 +0000 (0:00:17.878) 0:00:26.262 *********** 2025-05-06 01:02:00.051664 | orchestrator | skipping: [testbed-manager] 2025-05-06 01:02:00.051678 | orchestrator | 2025-05-06 01:02:00.051692 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-06 01:02:00.051705 | orchestrator | 2025-05-06 01:02:00.051719 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-06 01:02:00.051733 | orchestrator | Tuesday 06 May 2025 01:01:23 +0000 (0:00:00.678) 0:00:26.940 *********** 2025-05-06 01:02:00.051746 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:02:00.051760 | orchestrator | 2025-05-06 01:02:00.051774 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-06 01:02:00.051788 | orchestrator | 2025-05-06 01:02:00.051802 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-06 01:02:00.051815 | orchestrator | Tuesday 06 May 2025 01:01:25 +0000 (0:00:02.119) 0:00:29.060 *********** 2025-05-06 01:02:00.051829 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:02:00.051843 | orchestrator | 2025-05-06 01:02:00.051856 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-06 01:02:00.051870 | orchestrator | 2025-05-06 01:02:00.051884 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-06 01:02:00.051898 | orchestrator | Tuesday 06 May 2025 01:01:26 +0000 (0:00:01.746) 0:00:30.806 *********** 2025-05-06 
01:02:00.051911 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:02:00.051925 | orchestrator | 2025-05-06 01:02:00.051939 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 01:02:00.051954 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-06 01:02:00.051969 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 01:02:00.051983 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 01:02:00.051997 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 01:02:00.052011 | orchestrator | 2025-05-06 01:02:00.052024 | orchestrator | 2025-05-06 01:02:00.052038 | orchestrator | 2025-05-06 01:02:00.052052 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 01:02:00.052066 | orchestrator | Tuesday 06 May 2025 01:01:28 +0000 (0:00:01.460) 0:00:32.267 *********** 2025-05-06 01:02:00.052079 | orchestrator | =============================================================================== 2025-05-06 01:02:00.052094 | orchestrator | Create admin user ------------------------------------------------------ 17.88s 2025-05-06 01:02:00.052118 | orchestrator | Restart ceph manager service -------------------------------------------- 5.33s 2025-05-06 01:02:00.053969 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.29s 2025-05-06 01:02:00.053998 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.28s 2025-05-06 01:02:00.054010 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.13s 2025-05-06 01:02:00.054291 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.01s 
2025-05-06 01:02:00.054342 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.91s 2025-05-06 01:02:00.054355 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.82s 2025-05-06 01:02:00.054375 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.81s 2025-05-06 01:02:00.054388 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.81s 2025-05-06 01:02:00.054400 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.68s 2025-05-06 01:02:00.054413 | orchestrator | 2025-05-06 01:02:00.054719 | orchestrator | 2025-05-06 01:02:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:02:00.054764 | orchestrator | 2025-05-06 01:02:00.054779 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 01:02:00.054798 | orchestrator | 2025-05-06 01:02:00.054811 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 01:02:00.054823 | orchestrator | Tuesday 06 May 2025 00:59:57 +0000 (0:00:00.345) 0:00:00.345 *********** 2025-05-06 01:02:00.054836 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:02:00.054849 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:02:00.054861 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:02:00.054874 | orchestrator | 2025-05-06 01:02:00.054886 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 01:02:00.054898 | orchestrator | Tuesday 06 May 2025 00:59:57 +0000 (0:00:00.325) 0:00:00.670 *********** 2025-05-06 01:02:00.054911 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-06 01:02:00.054923 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-06 01:02:00.054936 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 
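The ceph dashboard bootstrap play above boils down to a fixed sequence of ceph CLI calls: disable the mgr dashboard module, set a batch of `mgr/dashboard/*` options, then re-enable the module. A sketch that assembles that sequence (the command layout is inferred from the task names in the log, not taken from the playbook source):

```python
# Option values as reported by the play's task names.
DASHBOARD_SETTINGS = {
    "mgr/dashboard/ssl": "false",
    "mgr/dashboard/server_port": "7000",
    "mgr/dashboard/server_addr": "0.0.0.0",
    "mgr/dashboard/standby_behaviour": "error",
    "mgr/dashboard/standby_error_status_code": "404",
}

def dashboard_bootstrap_commands(settings):
    """Return the ceph commands in the order the play runs them:
    module disable, one `ceph config set mgr` per option, module enable."""
    cmds = [["ceph", "mgr", "module", "disable", "dashboard"]]
    cmds += [["ceph", "config", "set", "mgr", key, value]
             for key, value in settings.items()]
    cmds.append(["ceph", "mgr", "module", "enable", "dashboard"])
    return cmds
```

Disabling before reconfiguring and enabling afterwards ensures the mgr picks up the new port, address, and SSL settings on module start.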
2025-05-06 01:02:00.054948 | orchestrator | 2025-05-06 01:02:00.054960 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-06 01:02:00.054972 | orchestrator | 2025-05-06 01:02:00.054985 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-06 01:02:00.054997 | orchestrator | Tuesday 06 May 2025 00:59:58 +0000 (0:00:00.426) 0:00:01.097 *********** 2025-05-06 01:02:00.055010 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:02:00.055023 | orchestrator | 2025-05-06 01:02:00.055035 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-06 01:02:00.055047 | orchestrator | Tuesday 06 May 2025 00:59:59 +0000 (0:00:00.905) 0:00:02.002 *********** 2025-05-06 01:02:00.055060 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-06 01:02:00.055072 | orchestrator | 2025-05-06 01:02:00.055084 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-06 01:02:00.055097 | orchestrator | Tuesday 06 May 2025 01:00:02 +0000 (0:00:03.735) 0:00:05.738 *********** 2025-05-06 01:02:00.055109 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-06 01:02:00.055121 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-06 01:02:00.055134 | orchestrator | 2025-05-06 01:02:00.055146 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-06 01:02:00.055158 | orchestrator | Tuesday 06 May 2025 01:00:09 +0000 (0:00:06.652) 0:00:12.390 *********** 2025-05-06 01:02:00.055170 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-06 01:02:00.055183 | orchestrator | 2025-05-06 
01:02:00.055195 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-05-06 01:02:00.055207 | orchestrator | Tuesday 06 May 2025 01:00:13 +0000 (0:00:03.580) 0:00:15.971 ***********
2025-05-06 01:02:00.055219 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-06 01:02:00.055232 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-05-06 01:02:00.055244 | orchestrator |
2025-05-06 01:02:00.055256 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-05-06 01:02:00.055277 | orchestrator | Tuesday 06 May 2025 01:00:17 +0000 (0:00:03.913) 0:00:19.884 ***********
2025-05-06 01:02:00.055290 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-06 01:02:00.055350 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-05-06 01:02:00.055368 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-05-06 01:02:00.055384 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-05-06 01:02:00.055398 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-05-06 01:02:00.055412 | orchestrator |
2025-05-06 01:02:00.055428 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-05-06 01:02:00.055442 | orchestrator | Tuesday 06 May 2025 01:00:33 +0000 (0:00:16.483) 0:00:36.368 ***********
2025-05-06 01:02:00.055457 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-05-06 01:02:00.055472 | orchestrator |
2025-05-06 01:02:00.055487 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-05-06 01:02:00.055502 | orchestrator | Tuesday 06 May 2025 01:00:38 +0000 (0:00:04.797) 0:00:41.165 ***********
2025-05-06 01:02:00.055519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.055548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.055566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.055592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.055609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.055625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.055648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.055669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.055689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.055708 | orchestrator |
2025-05-06 01:02:00.055720 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-05-06 01:02:00.055733 | orchestrator | Tuesday 06 May 2025 01:00:41 +0000 (0:00:03.485) 0:00:44.651 ***********
2025-05-06 01:02:00.055746 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-05-06 01:02:00.055758 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-05-06 01:02:00.055770 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-05-06 01:02:00.055782 | orchestrator |
2025-05-06 01:02:00.055795 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-05-06 01:02:00.055807 | orchestrator | Tuesday 06 May 2025 01:00:43 +0000 (0:00:02.127) 0:00:46.778 ***********
2025-05-06 01:02:00.055819 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:02:00.055832 | orchestrator |
2025-05-06 01:02:00.055844 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-05-06 01:02:00.055857 | orchestrator | Tuesday 06 May 2025 01:00:44 +0000 (0:00:00.102) 0:00:46.880 ***********
2025-05-06 01:02:00.055869 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:02:00.055881 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:02:00.055893 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:02:00.055905 | orchestrator |
2025-05-06 01:02:00.055918 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-06 01:02:00.055936 | orchestrator | Tuesday 06 May 2025 01:00:44 +0000 (0:00:00.416) 0:00:47.297 ***********
2025-05-06 01:02:00.055948 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 01:02:00.055961 | orchestrator |
2025-05-06 01:02:00.055973 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-05-06 01:02:00.055986 | orchestrator | Tuesday 06 May 2025 01:00:44 +0000 (0:00:00.521) 0:00:47.819 ***********
2025-05-06 01:02:00.055999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056145 | orchestrator |
2025-05-06 01:02:00.056158 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-05-06 01:02:00.056171 | orchestrator | Tuesday 06 May 2025 01:00:48 +0000 (0:00:03.821) 0:00:51.640 ***********
2025-05-06 01:02:00.056184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056230 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:02:00.056243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056288 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:02:00.056315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056367 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:02:00.056380 | orchestrator |
2025-05-06 01:02:00.056393 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-05-06 01:02:00.056406 | orchestrator | Tuesday 06 May 2025 01:00:51 +0000 (0:00:02.676) 0:00:54.316 ***********
2025-05-06 01:02:00.056418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056457 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:02:00.056475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056529 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:02:00.056542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056587 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:02:00.056600 | orchestrator |
2025-05-06 01:02:00.056613 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-05-06 01:02:00.056630 | orchestrator | Tuesday 06 May 2025 01:00:52 +0000 (0:00:01.310) 0:00:55.627 ***********
2025-05-06 01:02:00.056644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-05-06 01:02:00.056684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:02:00.056774 | orchestrator |
2025-05-06 01:02:00.056786 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-05-06 01:02:00.056799 | orchestrator | Tuesday 06 May 2025 01:00:56 +0000 (0:00:04.134) 0:00:59.761 ***********
2025-05-06 01:02:00.056811 |
orchestrator | changed: [testbed-node-2] 2025-05-06 01:02:00.056824 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:02:00.056836 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:02:00.056849 | orchestrator | 2025-05-06 01:02:00.056861 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-06 01:02:00.056874 | orchestrator | Tuesday 06 May 2025 01:00:59 +0000 (0:00:02.653) 0:01:02.415 *********** 2025-05-06 01:02:00.056891 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-06 01:02:00.056903 | orchestrator | 2025-05-06 01:02:00.056916 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-06 01:02:00.056928 | orchestrator | Tuesday 06 May 2025 01:01:01 +0000 (0:00:02.102) 0:01:04.517 *********** 2025-05-06 01:02:00.056940 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:02:00.056953 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:02:00.056965 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:02:00.056977 | orchestrator | 2025-05-06 01:02:00.056990 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-06 01:02:00.057002 | orchestrator | Tuesday 06 May 2025 01:01:03 +0000 (0:00:01.865) 0:01:06.383 *********** 2025-05-06 01:02:00.057028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-06 01:02:00.057044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-06 01:02:00.057058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-06 01:02:00.057071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-06 01:02:00.057090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-06 01:02:00.057108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-06 01:02:00.057121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:02:00.057134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:02:00.057147 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:02:00.057160 | orchestrator | 2025-05-06 01:02:00.057172 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-06 01:02:00.057185 | orchestrator | Tuesday 06 May 2025 01:01:13 +0000 (0:00:10.465) 0:01:16.848 *********** 2025-05-06 01:02:00.057203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-06 01:02:00.057223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-06 01:02:00.057236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-06 01:02:00.057249 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:02:00.057262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-06 01:02:00.057275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-06 01:02:00.057293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-06 01:02:00.057342 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:02:00.057362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-06 01:02:00.057376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-06 01:02:00.057389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-06 01:02:00.057402 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:02:00.057415 | orchestrator | 2025-05-06 01:02:00.057427 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-06 01:02:00.057440 | orchestrator | Tuesday 06 May 2025 01:01:15 +0000 (0:00:01.119) 0:01:17.968 *********** 2025-05-06 01:02:00.057454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-06 01:02:00.057474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-06 01:02:00.057521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-06 01:02:00.057536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-06 01:02:00.057549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-06 01:02:00.057562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2025-05-06 01:02:00.057582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:02:00.057594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:02:00.057622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2025-05-06 01:02:00.057637 | orchestrator | 2025-05-06 01:02:00.057649 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-06 01:02:00.057662 | orchestrator | Tuesday 06 May 2025 01:01:18 +0000 (0:00:03.318) 0:01:21.286 *********** 2025-05-06 01:02:00.057674 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:02:00.057686 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:02:00.057698 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:02:00.057711 | orchestrator | 2025-05-06 01:02:00.057723 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-06 01:02:00.057735 | orchestrator | Tuesday 06 May 2025 01:01:19 +0000 (0:00:00.617) 0:01:21.904 *********** 2025-05-06 01:02:00.057747 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:02:00.057764 | orchestrator | 2025-05-06 01:02:00.057776 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-06 01:02:00.057789 | orchestrator | Tuesday 06 May 2025 01:01:21 +0000 (0:00:02.677) 0:01:24.581 *********** 2025-05-06 01:02:00.057801 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:02:00.057813 | orchestrator | 2025-05-06 01:02:00.057826 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-06 01:02:00.057838 | orchestrator | Tuesday 06 May 2025 01:01:24 +0000 (0:00:02.351) 0:01:26.933 *********** 2025-05-06 01:02:00.057850 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:02:00.057862 | orchestrator | 2025-05-06 01:02:00.057874 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-06 01:02:00.057892 | orchestrator | Tuesday 06 May 2025 01:01:35 +0000 (0:00:11.259) 0:01:38.193 *********** 2025-05-06 01:02:00.057904 | orchestrator | 2025-05-06 01:02:00.057916 | orchestrator | TASK [barbican : Flush handlers] 
*********************************************** 2025-05-06 01:02:00.057933 | orchestrator | Tuesday 06 May 2025 01:01:35 +0000 (0:00:00.104) 0:01:38.297 *********** 2025-05-06 01:02:00.057946 | orchestrator | 2025-05-06 01:02:00.057958 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-06 01:02:00.057971 | orchestrator | Tuesday 06 May 2025 01:01:35 +0000 (0:00:00.308) 0:01:38.606 *********** 2025-05-06 01:02:00.057983 | orchestrator | 2025-05-06 01:02:00.057995 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-06 01:02:00.058007 | orchestrator | Tuesday 06 May 2025 01:01:35 +0000 (0:00:00.116) 0:01:38.723 *********** 2025-05-06 01:02:00.058064 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:02:00.058080 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:02:00.058092 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:02:00.058105 | orchestrator | 2025-05-06 01:02:00.058117 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-06 01:02:00.058129 | orchestrator | Tuesday 06 May 2025 01:01:47 +0000 (0:00:11.463) 0:01:50.186 *********** 2025-05-06 01:02:00.058141 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:02:00.058154 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:02:00.058166 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:02:00.058178 | orchestrator | 2025-05-06 01:02:00.058190 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-06 01:02:00.058202 | orchestrator | Tuesday 06 May 2025 01:01:52 +0000 (0:00:05.167) 0:01:55.354 *********** 2025-05-06 01:02:00.058215 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:02:00.058227 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:02:00.058239 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:02:00.058251 | orchestrator | 2025-05-06 
01:02:00.058264 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 01:02:00.058276 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-06 01:02:00.058289 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 01:02:00.058320 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 01:02:00.058333 | orchestrator | 2025-05-06 01:02:00.058345 | orchestrator | 2025-05-06 01:02:00.058358 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 01:02:00.058370 | orchestrator | Tuesday 06 May 2025 01:01:58 +0000 (0:00:06.227) 0:02:01.581 *********** 2025-05-06 01:02:00.058382 | orchestrator | =============================================================================== 2025-05-06 01:02:00.058394 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.48s 2025-05-06 01:02:00.058406 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.46s 2025-05-06 01:02:00.058418 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.26s 2025-05-06 01:02:00.058430 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.47s 2025-05-06 01:02:00.058443 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.65s 2025-05-06 01:02:00.058455 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.23s 2025-05-06 01:02:00.058467 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.17s 2025-05-06 01:02:00.058485 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.80s 2025-05-06 01:02:03.098774 | 
orchestrator | barbican : Copying over config.json files for services ------------------ 4.13s
2025-05-06 01:02:03.098913 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.91s
2025-05-06 01:02:03.098934 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.82s
2025-05-06 01:02:03.098949 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.74s
2025-05-06 01:02:03.098963 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.58s
2025-05-06 01:02:03.098978 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.49s
2025-05-06 01:02:03.098992 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.32s
2025-05-06 01:02:03.099006 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.68s
2025-05-06 01:02:03.099021 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.68s
2025-05-06 01:02:03.099036 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.65s
2025-05-06 01:02:03.099063 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.35s
2025-05-06 01:02:03.099078 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.13s
2025-05-06 01:02:03.099107 | orchestrator | 2025-05-06 01:02:03 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:02:03.099886 | orchestrator | 2025-05-06 01:02:03 | INFO  | Task becfae1d-342a-4bda-8e27-5ccb811fdb00 is in state STARTED
2025-05-06 01:02:03.101489 | orchestrator | 2025-05-06 01:02:03 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 01:02:03.105435 | orchestrator | 2025-05-06 01:02:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:02:03.106610 | orchestrator | 2025-05-06 01:02:03 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state STARTED
2025-05-06 01:02:03.108296 | orchestrator | 2025-05-06 01:02:03 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:03:10.217836 | orchestrator | 2025-05-06 01:03:10 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:03:10.223697 | orchestrator | 2025-05-06 01:03:10 | INFO  | Task becfae1d-342a-4bda-8e27-5ccb811fdb00 is in state STARTED
2025-05-06 01:03:10.229786 | orchestrator | 2025-05-06 01:03:10 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED
2025-05-06 01:03:10.231289 | orchestrator | 2025-05-06 01:03:10 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED
2025-05-06 01:03:10.232720 | orchestrator | 2025-05-06 01:03:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:03:10.235001 | orchestrator | 2025-05-06 01:03:10 | INFO  | Task 314e7c8b-f54b-4cdf-8d0d-6a728aed2637 is in state SUCCESS
2025-05-06 01:03:10.236612 | orchestrator |
2025-05-06 01:03:10.236648 | orchestrator |
2025-05-06 01:03:10.236664 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 01:03:10.236680 | orchestrator |
2025-05-06 01:03:10.236695 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 01:03:10.236710 | orchestrator | Tuesday 06 May 2025 00:59:57 +0000 (0:00:00.319)       0:00:00.319 ***********
2025-05-06 01:03:10.236724 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:03:10.236740 | orchestrator | ok: [testbed-node-1]
2025-05-06 01:03:10.236755 | orchestrator | ok: [testbed-node-2]
2025-05-06 01:03:10.236769 | orchestrator |
2025-05-06 01:03:10.236784 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 01:03:10.236798 | orchestrator | Tuesday 06 May 2025 00:59:57 +0000 (0:00:00.456)       0:00:00.776 ***********
2025-05-06 01:03:10.236813 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-06 01:03:10.236828 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-06 01:03:10.236843 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-06 01:03:10.236923 | orchestrator |
2025-05-06 01:03:10.236942 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-06 01:03:10.237050 | orchestrator |
2025-05-06 01:03:10.237064 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-06 01:03:10.237078 | orchestrator | Tuesday 06 May 2025 00:59:58 +0000 (0:00:00.330)       0:00:01.106 ***********
2025-05-06 01:03:10.237092 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 01:03:10.237107 | orchestrator |
2025-05-06 01:03:10.237121 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-05-06 01:03:10.237135 | orchestrator | Tuesday 06 May 2025 00:59:58 +0000 (0:00:00.702)       0:00:01.809 ***********
2025-05-06 01:03:10.237148 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-05-06 01:03:10.237162 | orchestrator |
2025-05-06 01:03:10.237176 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-05-06 01:03:10.237190 | orchestrator | Tuesday 06 May 2025 01:00:02 +0000 (0:00:03.512)       0:00:05.322 ***********
2025-05-06 01:03:10.237204 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-05-06 01:03:10.237218 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-05-06 01:03:10.237253 | orchestrator |
2025-05-06 01:03:10.237268 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-05-06 01:03:10.237282 | orchestrator | Tuesday 06 May 2025 01:00:09 +0000 (0:00:07.587)       0:00:12.909 ***********
2025-05-06 01:03:10.237296 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-06 01:03:10.237310 | orchestrator |
2025-05-06 01:03:10.237324 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-05-06 01:03:10.237338 | orchestrator | Tuesday 06 May 2025 01:00:13 +0000 (0:00:03.766)       0:00:16.676 ***********
2025-05-06 01:03:10.237352 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-06 01:03:10.237366 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-05-06 01:03:10.237402 | orchestrator |
2025-05-06 01:03:10.237416 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-05-06 01:03:10.237430 | orchestrator | Tuesday 06 May 2025 01:00:17 +0000 (0:00:04.013)       0:00:20.690 ***********
2025-05-06 01:03:10.237444 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-06 01:03:10.237458 | orchestrator |
2025-05-06 01:03:10.237472 | orchestrator | TASK [service-ks-register : designate | Granting user roles]
******************* 2025-05-06 01:03:10.237485 | orchestrator | Tuesday 06 May 2025 01:00:21 +0000 (0:00:03.343) 0:00:24.034 *********** 2025-05-06 01:03:10.237499 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-06 01:03:10.237513 | orchestrator | 2025-05-06 01:03:10.237527 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-06 01:03:10.237541 | orchestrator | Tuesday 06 May 2025 01:00:25 +0000 (0:00:04.429) 0:00:28.463 *********** 2025-05-06 01:03:10.237558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-06 01:03:10.237597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-06 01:03:10.237616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-06 01:03:10.237634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237659 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2025-05-06 01:03:10.237719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237778 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.237904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.237933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.237954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.237969 | orchestrator | 2025-05-06 01:03:10.237984 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-06 01:03:10.237998 | orchestrator | Tuesday 06 May 2025 01:00:28 +0000 (0:00:03.066) 0:00:31.529 *********** 2025-05-06 01:03:10.238012 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:03:10.238077 | orchestrator | 2025-05-06 01:03:10.238092 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-06 01:03:10.238106 | orchestrator | Tuesday 06 May 2025 01:00:28 +0000 (0:00:00.121) 0:00:31.651 *********** 2025-05-06 01:03:10.238120 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:03:10.238134 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:03:10.238148 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:03:10.238162 | orchestrator | 2025-05-06 01:03:10.238175 | orchestrator | TASK [designate : include_tasks] *********************************************** 
2025-05-06 01:03:10.238189 | orchestrator | Tuesday 06 May 2025 01:00:29 +0000 (0:00:00.436) 0:00:32.087 *********** 2025-05-06 01:03:10.238203 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:03:10.238217 | orchestrator | 2025-05-06 01:03:10.238261 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-06 01:03:10.238276 | orchestrator | Tuesday 06 May 2025 01:00:29 +0000 (0:00:00.568) 0:00:32.655 *********** 2025-05-06 01:03:10.238291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-06 01:03:10.238306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-06 01:03:10.238326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-06 01:03:10.238382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 
01:03:10.238409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 
5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238578 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.238695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.239083 | orchestrator | 2025-05-06 01:03:10.239126 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-06 01:03:10.239167 | orchestrator | Tuesday 06 May 2025 01:00:36 +0000 (0:00:06.566) 0:00:39.221 *********** 2025-05-06 01:03:10.239197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 01:03:10.239259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-06 01:03:10.239283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.239299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.239313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.239348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.239364 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:03:10.239378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 01:03:10.239564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-06 01:03:10.239585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.239600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.239620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.239674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.239694 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:03:10.239721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 01:03:10.239746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-06 01:03:10.239772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.240385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.240435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.240917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.240938 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:03:10.240952 | orchestrator |
2025-05-06 01:03:10.240965 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-05-06 01:03:10.240978 | orchestrator | Tuesday 06 May 2025 01:00:37 +0000 (0:00:01.070) 0:00:40.292 ***********
2025-05-06 01:03:10.240991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.241005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.241019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241223 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:03:10.241266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.241279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.241293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241422 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:03:10.241436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.241448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.241461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241553 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:03:10.241566 | orchestrator |
2025-05-06 01:03:10.241579 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-05-06 01:03:10.241591 | orchestrator | Tuesday 06 May 2025 01:00:38 +0000 (0:00:01.552) 0:00:41.845 ***********
2025-05-06 01:03:10.241604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.241618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.241631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.241651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.241692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.241707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.241720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.241987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
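Editor's note: throughout the task output above, every designate-sink item reports "skipping" on all three nodes while the other services report "changed", because its service definition carries 'enabled': False. A minimal Python sketch (hypothetical helper names, not kolla-ansible's actual implementation) of that per-item enabled-flag filtering:

```python
# Hypothetical sketch of the per-item condition visible in the log above:
# each service entry carries an "enabled" flag, and the loop acts only on
# enabled services, so disabled ones (designate-sink) are skipped.

def item_status(services):
    """Return (status, name) pairs mirroring the Ansible per-item output."""
    return [("changed" if svc["enabled"] else "skipping", name)
            for name, svc in services.items()]

# Two entries trimmed down from the service map shown in the log.
services = {
    "designate-worker": {"container_name": "designate_worker", "enabled": True},
    "designate-sink": {"container_name": "designate_sink", "enabled": False},
}

for status, name in item_status(services):
    print(f"{status}: (item '{name}')")
```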
2025-05-06 01:03:10.242001 | orchestrator |
2025-05-06 01:03:10.242046 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-05-06 01:03:10.242063 | orchestrator | Tuesday 06 May 2025 01:00:46 +0000 (0:00:07.498) 0:00:49.343 ***********
2025-05-06 01:03:10.242078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.242100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.242144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.242162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.242175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.242188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.242207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.242220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.242284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.242301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.242314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.242326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.242346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.242359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.242377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.242440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.242468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.242504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.242538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242550 | orchestrator | 2025-05-06 01:03:10.242563 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-06 01:03:10.242576 | orchestrator | Tuesday 06 May 2025 01:01:09 +0000 (0:00:23.341) 0:01:12.685 *********** 2025-05-06 01:03:10.242588 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-06 01:03:10.242601 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-06 01:03:10.242614 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-06 01:03:10.242626 | orchestrator | 2025-05-06 01:03:10.242639 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-06 01:03:10.242661 | orchestrator | Tuesday 06 May 2025 01:01:16 +0000 (0:00:07.110) 0:01:19.795 *********** 2025-05-06 01:03:10.242673 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-06 01:03:10.242691 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-06 01:03:10.242704 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-06 01:03:10.242716 | orchestrator | 2025-05-06 01:03:10.242729 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-06 01:03:10.242742 | orchestrator | Tuesday 06 May 2025 01:01:22 +0000 (0:00:05.163) 0:01:24.959 *********** 2025-05-06 01:03:10.242754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 01:03:10.242775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 01:03:10.242788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 01:03:10.242807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.242829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.242842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.242947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.242995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.243008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.243021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.243039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.243053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-06 01:03:10.243071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.243084 | orchestrator | 2025-05-06 01:03:10.243097 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-06 01:03:10.243109 | orchestrator | Tuesday 06 May 2025 01:01:25 +0000 (0:00:03.560) 0:01:28.520 *********** 2025-05-06 01:03:10.243122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  
2025-05-06 01:03:10.243135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 01:03:10.243153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-06 01:03:10.243173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.243186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.243198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.243211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-06 01:03:10.243224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.243315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-06 01:03:10.243336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243546 | orchestrator |
2025-05-06 01:03:10.243558 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-06 01:03:10.243571 | orchestrator | Tuesday 06 May 2025 01:01:28 +0000 (0:00:02.797) 0:01:31.318 ***********
2025-05-06 01:03:10.243583 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:03:10.243596 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:03:10.243609 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:03:10.243621 | orchestrator |
2025-05-06 01:03:10.243633 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-05-06 01:03:10.243646 | orchestrator | Tuesday 06 May 2025 01:01:29 +0000 (0:00:01.121) 0:01:32.440 ***********
2025-05-06 01:03:10.243673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.243687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.243701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243777 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:03:10.243790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.243803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.243816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243892 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:03:10.243905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.243918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.243931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.243994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244007 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:03:10.244019 | orchestrator |
2025-05-06 01:03:10.244032 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-05-06 01:03:10.244044 | orchestrator | Tuesday 06 May 2025 01:01:30 +0000 (0:00:01.171) 0:01:33.611 ***********
2025-05-06 01:03:10.244057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.244070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.244089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-06 01:03:10.244108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.244121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.244134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-06 01:03:10.244147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-06 01:03:10.244448 | orchestrator |
2025-05-06 01:03:10.244470 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-06 01:03:10.244492 | orchestrator | Tuesday 06 May 2025 01:01:35 +0000 (0:00:05.097) 0:01:38.709 ***********
2025-05-06 01:03:10.244514 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:03:10.244534 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:03:10.244552 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:03:10.244565 | orchestrator |
2025-05-06 01:03:10.244577 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-05-06 01:03:10.244590 | orchestrator | Tuesday 06 May 2025 01:01:36 +0000 (0:00:00.693) 0:01:39.402 ***********
2025-05-06 01:03:10.244603 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-05-06 01:03:10.244615 | orchestrator |
2025-05-06 01:03:10.244627 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-05-06 01:03:10.244639 | orchestrator | Tuesday 06 May 2025 01:01:38 +0000 (0:00:02.324) 0:01:41.726 ***********
2025-05-06 01:03:10.244651 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-06 01:03:10.244663 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-05-06 01:03:10.244676 | orchestrator |
2025-05-06 01:03:10.244688 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-05-06 01:03:10.244700 | orchestrator | Tuesday 06 May 2025 01:01:41 +0000 (0:00:02.439) 0:01:44.166 ***********
2025-05-06 01:03:10.244712 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:03:10.244725 | orchestrator |
2025-05-06 01:03:10.244737 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-06 01:03:10.244749 | orchestrator | Tuesday 06 May 2025 01:01:55 +0000 (0:00:14.280) 0:01:58.446 ***********
2025-05-06 01:03:10.244762 | orchestrator |
2025-05-06 01:03:10.244779 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-06 01:03:10.244791 | orchestrator | Tuesday 06 May 2025 01:01:55 +0000 (0:00:00.055) 0:01:58.501 ***********
2025-05-06 01:03:10.244804 | orchestrator |
2025-05-06 01:03:10.244816 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-06 01:03:10.244834 | orchestrator | Tuesday 06 May 2025 01:01:55 +0000 (0:00:00.042) 0:01:58.544 ***********
2025-05-06 01:03:10.244846 | orchestrator |
2025-05-06 01:03:10.244858 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-05-06 01:03:10.244871 | orchestrator | Tuesday 06 May 2025 01:01:55 +0000 (0:00:00.044) 0:01:58.588 ***********
2025-05-06 01:03:10.244883 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:03:10.244895 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:03:10.244908 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:03:10.244920 | orchestrator |
2025-05-06 01:03:10.244932 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-05-06 01:03:10.244944 | orchestrator | Tuesday 06 May 2025 01:02:08 +0000 (0:00:13.237) 0:02:11.825 ***********
2025-05-06 01:03:10.244957 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:03:10.244969 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:03:10.244981 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:03:10.244993 | orchestrator |
2025-05-06 01:03:10.245006 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-05-06 01:03:10.245018 | orchestrator | Tuesday 06 May 2025 01:02:15 +0000 (0:00:07.075) 0:02:18.901 ***********
2025-05-06 01:03:10.245030 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:03:10.245043 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:03:10.245055 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:03:10.245067 | orchestrator |
2025-05-06 01:03:10.245079 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-05-06 01:03:10.245102 | orchestrator | Tuesday 06 May 2025 01:02:28 +0000
(0:00:12.386) 0:02:31.288 *********** 2025-05-06 01:03:10.245115 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:03:10.245127 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:03:10.245139 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:03:10.245151 | orchestrator | 2025-05-06 01:03:10.245164 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-06 01:03:10.245176 | orchestrator | Tuesday 06 May 2025 01:02:39 +0000 (0:00:11.664) 0:02:42.952 *********** 2025-05-06 01:03:10.245188 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:03:10.245201 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:03:10.245213 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:03:10.245250 | orchestrator | 2025-05-06 01:03:10.245265 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-06 01:03:10.245277 | orchestrator | Tuesday 06 May 2025 01:02:51 +0000 (0:00:11.946) 0:02:54.899 *********** 2025-05-06 01:03:10.245290 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:03:10.245302 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:03:10.245315 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:03:10.245334 | orchestrator | 2025-05-06 01:03:10.245347 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-06 01:03:10.245360 | orchestrator | Tuesday 06 May 2025 01:03:02 +0000 (0:00:10.658) 0:03:05.558 *********** 2025-05-06 01:03:10.245372 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:03:10.245386 | orchestrator | 2025-05-06 01:03:10.245398 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 01:03:10.245439 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-06 01:03:10.245453 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 
failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 01:03:10.245466 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 01:03:10.245479 | orchestrator | 2025-05-06 01:03:10.245499 | orchestrator | 2025-05-06 01:03:10.245521 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 01:03:10.245542 | orchestrator | Tuesday 06 May 2025 01:03:08 +0000 (0:00:05.405) 0:03:10.963 *********** 2025-05-06 01:03:10.245565 | orchestrator | =============================================================================== 2025-05-06 01:03:10.245588 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.34s 2025-05-06 01:03:10.245607 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.28s 2025-05-06 01:03:10.245620 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.24s 2025-05-06 01:03:10.245633 | orchestrator | designate : Restart designate-central container ------------------------ 12.39s 2025-05-06 01:03:10.245645 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.95s 2025-05-06 01:03:10.245657 | orchestrator | designate : Restart designate-producer container ----------------------- 11.66s 2025-05-06 01:03:10.245669 | orchestrator | designate : Restart designate-worker container ------------------------- 10.66s 2025-05-06 01:03:10.245682 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.59s 2025-05-06 01:03:10.245694 | orchestrator | designate : Copying over config.json files for services ----------------- 7.50s 2025-05-06 01:03:10.245706 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.11s 2025-05-06 01:03:10.245726 | orchestrator | designate : Restart designate-api container ----------------------------- 7.08s 
2025-05-06 01:03:10.245739 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.57s 2025-05-06 01:03:10.245751 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.41s 2025-05-06 01:03:10.245778 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.16s 2025-05-06 01:03:10.245810 | orchestrator | designate : Check designate containers ---------------------------------- 5.10s 2025-05-06 01:03:10.245834 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.43s 2025-05-06 01:03:10.245854 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.01s 2025-05-06 01:03:10.245882 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.77s 2025-05-06 01:03:13.281280 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.56s 2025-05-06 01:03:13.281435 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.51s 2025-05-06 01:03:13.281458 | orchestrator | 2025-05-06 01:03:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:03:13.281492 | orchestrator | 2025-05-06 01:03:13 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:03:13.282736 | orchestrator | 2025-05-06 01:03:13 | INFO  | Task becfae1d-342a-4bda-8e27-5ccb811fdb00 is in state STARTED 2025-05-06 01:03:13.284454 | orchestrator | 2025-05-06 01:03:13 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:03:13.286108 | orchestrator | 2025-05-06 01:03:13 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:03:13.287799 | orchestrator | 2025-05-06 01:03:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:03:16.339291 | orchestrator | 2025-05-06 01:03:13 | INFO  | Wait 1 second(s) 
until the next check 2025-05-06 01:03:16.339447 | orchestrator | 2025-05-06 01:03:16 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:03:16.340925 | orchestrator | 2025-05-06 01:03:16 | INFO  | Task becfae1d-342a-4bda-8e27-5ccb811fdb00 is in state SUCCESS 2025-05-06 01:03:16.342630 | orchestrator | 2025-05-06 01:03:16.342683 | orchestrator | 2025-05-06 01:03:16.342699 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 01:03:16.342713 | orchestrator | 2025-05-06 01:03:16.342728 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 01:03:16.342743 | orchestrator | Tuesday 06 May 2025 01:02:02 +0000 (0:00:00.225) 0:00:00.225 *********** 2025-05-06 01:03:16.342757 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:03:16.342775 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:03:16.342789 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:03:16.342803 | orchestrator | 2025-05-06 01:03:16.342817 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 01:03:16.342885 | orchestrator | Tuesday 06 May 2025 01:02:03 +0000 (0:00:00.299) 0:00:00.525 *********** 2025-05-06 01:03:16.342904 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-06 01:03:16.342918 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-06 01:03:16.342932 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-06 01:03:16.342946 | orchestrator | 2025-05-06 01:03:16.342960 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-06 01:03:16.342974 | orchestrator | 2025-05-06 01:03:16.342988 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-06 01:03:16.343002 | orchestrator | Tuesday 06 May 2025 01:02:03 +0000 (0:00:00.261) 
0:00:00.787 *********** 2025-05-06 01:03:16.343016 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:03:16.343032 | orchestrator | 2025-05-06 01:03:16.343046 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-06 01:03:16.343060 | orchestrator | Tuesday 06 May 2025 01:02:03 +0000 (0:00:00.641) 0:00:01.428 *********** 2025-05-06 01:03:16.343074 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-06 01:03:16.343088 | orchestrator | 2025-05-06 01:03:16.343131 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-06 01:03:16.343146 | orchestrator | Tuesday 06 May 2025 01:02:07 +0000 (0:00:03.562) 0:00:04.991 *********** 2025-05-06 01:03:16.343160 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-06 01:03:16.343175 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-06 01:03:16.343189 | orchestrator | 2025-05-06 01:03:16.343203 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-06 01:03:16.343217 | orchestrator | Tuesday 06 May 2025 01:02:14 +0000 (0:00:06.620) 0:00:11.611 *********** 2025-05-06 01:03:16.343259 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-06 01:03:16.343274 | orchestrator | 2025-05-06 01:03:16.343288 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-06 01:03:16.343301 | orchestrator | Tuesday 06 May 2025 01:02:17 +0000 (0:00:03.693) 0:00:15.305 *********** 2025-05-06 01:03:16.343315 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-06 01:03:16.343328 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-06 
01:03:16.343342 | orchestrator | 2025-05-06 01:03:16.343356 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-06 01:03:16.343369 | orchestrator | Tuesday 06 May 2025 01:02:21 +0000 (0:00:03.844) 0:00:19.149 *********** 2025-05-06 01:03:16.343383 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-06 01:03:16.343397 | orchestrator | 2025-05-06 01:03:16.343410 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-06 01:03:16.343424 | orchestrator | Tuesday 06 May 2025 01:02:25 +0000 (0:00:03.318) 0:00:22.467 *********** 2025-05-06 01:03:16.343438 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-06 01:03:16.343451 | orchestrator | 2025-05-06 01:03:16.343465 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-06 01:03:16.343494 | orchestrator | Tuesday 06 May 2025 01:02:29 +0000 (0:00:04.611) 0:00:27.079 *********** 2025-05-06 01:03:16.343509 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:03:16.343523 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:03:16.343536 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:03:16.343550 | orchestrator | 2025-05-06 01:03:16.343564 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-06 01:03:16.343577 | orchestrator | Tuesday 06 May 2025 01:02:30 +0000 (0:00:00.877) 0:00:27.956 *********** 2025-05-06 01:03:16.343594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.343629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.343692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.343710 | orchestrator | 2025-05-06 01:03:16.343724 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-06 01:03:16.343738 | orchestrator | Tuesday 06 May 2025 01:02:32 +0000 (0:00:01.701) 0:00:29.658 *********** 2025-05-06 01:03:16.343752 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:03:16.343766 | orchestrator | 2025-05-06 01:03:16.343780 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-06 01:03:16.343794 | orchestrator | Tuesday 06 May 2025 01:02:32 +0000 (0:00:00.130) 0:00:29.788 *********** 2025-05-06 01:03:16.343808 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:03:16.343821 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:03:16.343835 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:03:16.343849 | orchestrator | 2025-05-06 01:03:16.343863 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-06 01:03:16.343877 | orchestrator | Tuesday 06 May 2025 01:02:32 +0000 (0:00:00.303) 0:00:30.092 *********** 2025-05-06 01:03:16.343891 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:03:16.343905 | orchestrator | 2025-05-06 01:03:16.343919 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-06 01:03:16.343933 | orchestrator | Tuesday 06 May 2025 01:02:33 +0000 (0:00:00.528) 0:00:30.620 *********** 2025-05-06 01:03:16.343947 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.343972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.343995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.344010 | orchestrator | 2025-05-06 01:03:16.344024 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-06 01:03:16.344038 | orchestrator | Tuesday 06 May 2025 01:02:34 +0000 (0:00:01.580) 0:00:32.201 *********** 2025-05-06 01:03:16.344065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 01:03:16.344081 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:03:16.344095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 01:03:16.344268 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:03:16.344300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 01:03:16.344329 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:03:16.344343 | orchestrator | 2025-05-06 01:03:16.344357 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-06 01:03:16.344371 | orchestrator | Tuesday 06 May 2025 01:02:35 +0000 (0:00:00.508) 0:00:32.710 *********** 2025-05-06 01:03:16.344386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 01:03:16.344400 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:03:16.344430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 01:03:16.344446 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:03:16.344460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 01:03:16.344475 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:03:16.344489 | orchestrator | 2025-05-06 01:03:16.344511 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-06 01:03:16.344525 | orchestrator | Tuesday 06 May 2025 01:02:35 +0000 (0:00:00.697) 0:00:33.408 *********** 2025-05-06 01:03:16.344548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.344564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.344590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.344605 | orchestrator | 2025-05-06 01:03:16.344619 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-06 01:03:16.344633 | orchestrator | Tuesday 06 May 2025 01:02:37 +0000 (0:00:01.426) 0:00:34.834 *********** 2025-05-06 01:03:16.344647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.344669 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.344692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.344707 | orchestrator | 2025-05-06 01:03:16.344722 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
*************** 2025-05-06 01:03:16.344735 | orchestrator | Tuesday 06 May 2025 01:02:39 +0000 (0:00:02.475) 0:00:37.309 *********** 2025-05-06 01:03:16.344749 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-06 01:03:16.344764 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-06 01:03:16.344778 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-06 01:03:16.344791 | orchestrator | 2025-05-06 01:03:16.344805 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-06 01:03:16.344819 | orchestrator | Tuesday 06 May 2025 01:02:42 +0000 (0:00:02.397) 0:00:39.706 *********** 2025-05-06 01:03:16.344833 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:03:16.344850 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:03:16.344866 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:03:16.344881 | orchestrator | 2025-05-06 01:03:16.344898 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-06 01:03:16.344913 | orchestrator | Tuesday 06 May 2025 01:02:44 +0000 (0:00:02.194) 0:00:41.901 *********** 2025-05-06 01:03:16.344940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 01:03:16.344965 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:03:16.344982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 01:03:16.344999 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:03:16.345025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-06 01:03:16.345042 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:03:16.345058 | orchestrator | 2025-05-06 01:03:16.345074 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-06 01:03:16.345090 | orchestrator | Tuesday 06 May 2025 01:02:45 +0000 (0:00:00.702) 0:00:42.604 *********** 2025-05-06 01:03:16.345106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.345123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.345163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-06 01:03:16.345182 | orchestrator | 2025-05-06 01:03:16.345197 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-06 01:03:16.345211 | orchestrator | Tuesday 06 May 2025 01:02:46 +0000 (0:00:01.298) 0:00:43.902 *********** 2025-05-06 01:03:16.345385 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:03:16.345420 | orchestrator | 2025-05-06 01:03:16.345435 | orchestrator 
| TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-06 01:03:16.345448 | orchestrator | Tuesday 06 May 2025 01:02:48 +0000 (0:00:02.507) 0:00:46.410 *********** 2025-05-06 01:03:16.345463 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:03:16.345476 | orchestrator | 2025-05-06 01:03:16.345490 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-06 01:03:16.345512 | orchestrator | Tuesday 06 May 2025 01:02:51 +0000 (0:00:02.349) 0:00:48.759 *********** 2025-05-06 01:03:16.345539 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:03:16.345967 | orchestrator | 2025-05-06 01:03:16.345990 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-06 01:03:16.346003 | orchestrator | Tuesday 06 May 2025 01:03:04 +0000 (0:00:13.412) 0:01:02.172 *********** 2025-05-06 01:03:16.346071 | orchestrator | 2025-05-06 01:03:16.346087 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-06 01:03:16.346099 | orchestrator | Tuesday 06 May 2025 01:03:04 +0000 (0:00:00.061) 0:01:02.233 *********** 2025-05-06 01:03:16.346111 | orchestrator | 2025-05-06 01:03:16.346124 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-06 01:03:16.346136 | orchestrator | Tuesday 06 May 2025 01:03:04 +0000 (0:00:00.170) 0:01:02.404 *********** 2025-05-06 01:03:16.346148 | orchestrator | 2025-05-06 01:03:16.346160 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-06 01:03:16.346173 | orchestrator | Tuesday 06 May 2025 01:03:05 +0000 (0:00:00.058) 0:01:02.462 *********** 2025-05-06 01:03:16.346185 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:03:16.346197 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:03:16.346209 | orchestrator | changed: [testbed-node-1] 2025-05-06 
01:03:16.346247 | orchestrator | 2025-05-06 01:03:16.346264 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 01:03:16.346278 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-06 01:03:16.346292 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-06 01:03:16.346305 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-06 01:03:16.346337 | orchestrator | 2025-05-06 01:03:16.346349 | orchestrator | 2025-05-06 01:03:16.346361 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 01:03:16.346374 | orchestrator | Tuesday 06 May 2025 01:03:14 +0000 (0:00:09.940) 0:01:12.403 *********** 2025-05-06 01:03:16.346386 | orchestrator | =============================================================================== 2025-05-06 01:03:16.346398 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.41s 2025-05-06 01:03:16.346410 | orchestrator | placement : Restart placement-api container ----------------------------- 9.94s 2025-05-06 01:03:16.346422 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.62s 2025-05-06 01:03:16.346434 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.61s 2025-05-06 01:03:16.346447 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.84s 2025-05-06 01:03:16.346459 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.69s 2025-05-06 01:03:16.346471 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.56s 2025-05-06 01:03:16.346483 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 
3.32s 2025-05-06 01:03:16.346495 | orchestrator | placement : Creating placement databases -------------------------------- 2.51s 2025-05-06 01:03:16.346507 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.48s 2025-05-06 01:03:16.346519 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.40s 2025-05-06 01:03:16.346532 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.35s 2025-05-06 01:03:16.346544 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.19s 2025-05-06 01:03:16.346556 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.70s 2025-05-06 01:03:16.346568 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.58s 2025-05-06 01:03:16.346581 | orchestrator | placement : Copying over config.json files for services ----------------- 1.43s 2025-05-06 01:03:16.346596 | orchestrator | placement : Check placement containers ---------------------------------- 1.30s 2025-05-06 01:03:16.346611 | orchestrator | placement : include_tasks ----------------------------------------------- 0.88s 2025-05-06 01:03:16.346626 | orchestrator | placement : Copying over existing policy file --------------------------- 0.70s 2025-05-06 01:03:16.346646 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.70s 2025-05-06 01:03:16.346662 | orchestrator | 2025-05-06 01:03:16 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:03:16.346682 | orchestrator | 2025-05-06 01:03:16 | INFO  | Task 93e17857-ecbd-4fa3-943f-6960b5e508a9 is in state STARTED 2025-05-06 01:03:16.347724 | orchestrator | 2025-05-06 01:03:16 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:03:16.349240 | orchestrator | 2025-05-06 01:03:16 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:03:19.402505 | orchestrator | 2025-05-06 01:03:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:03:19.402616 | orchestrator | 2025-05-06 01:03:19 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:03:19.403772 | orchestrator | 2025-05-06 01:03:19 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:03:19.405738 | orchestrator | 2025-05-06 01:03:19 | INFO  | Task 93e17857-ecbd-4fa3-943f-6960b5e508a9 is in state STARTED 2025-05-06 01:03:19.407214 | orchestrator | 2025-05-06 01:03:19 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:03:19.408987 | orchestrator | 2025-05-06 01:03:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:03:19.409096 | orchestrator | 2025-05-06 01:03:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:03:22.470739 | orchestrator | 2025-05-06 01:03:22 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:03:22.471413 | orchestrator | 2025-05-06 01:03:22 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:03:22.474080 | orchestrator | 2025-05-06 01:03:22 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:03:22.474900 | orchestrator | 2025-05-06 01:03:22 | INFO  | Task 93e17857-ecbd-4fa3-943f-6960b5e508a9 is in state SUCCESS 2025-05-06 01:03:22.476542 | orchestrator | 2025-05-06 01:03:22 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:03:22.477265 | orchestrator | 2025-05-06 01:03:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:03:25.512575 | orchestrator | 2025-05-06 01:03:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:03:25.512890 | orchestrator | 2025-05-06 01:03:25 | INFO  | Task 
deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:03:52.949682 | orchestrator | 2025-05-06 01:03:49 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:03:52.949805 | orchestrator | 2025-05-06 01:03:52 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:03:52.950392 | orchestrator | 2025-05-06 01:03:52 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:03:52.950983 | orchestrator | 2025-05-06 01:03:52 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:03:52.951616 | orchestrator | 2025-05-06 01:03:52 | INFO  | Task b02e3dc6-ff30-4917-a695-bd74c5014de4 is in state STARTED 2025-05-06 01:03:52.952599 | orchestrator | 2025-05-06 01:03:52 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:03:52.953614 | orchestrator | 2025-05-06 01:03:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:03:55.990726 | orchestrator | 2025-05-06 01:03:52 | INFO  | Wait 1
second(s) until the next check 2025-05-06 01:04:02.078471 | orchestrator | 2025-05-06 01:04:02 | INFO  | Task
deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:04:02.080004 | orchestrator | 2025-05-06 01:04:02 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:04:02.080205 | orchestrator | 2025-05-06 01:04:02 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:04:02.080233 | orchestrator | 2025-05-06 01:04:02 | INFO  | Task b02e3dc6-ff30-4917-a695-bd74c5014de4 is in state SUCCESS 2025-05-06 01:04:02.081851 | orchestrator | 2025-05-06 01:04:02 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:04:02.083064 | orchestrator | 2025-05-06 01:04:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:04:05.116412 | orchestrator | 2025-05-06 01:04:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:04:05.116512 | orchestrator | 2025-05-06 01:04:05 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:04:05.116898 | orchestrator | 2025-05-06 01:04:05 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:04:05.118342 | orchestrator | 2025-05-06 01:04:05 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:04:05.118801 | orchestrator | 2025-05-06 01:04:05 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:04:05.119399 | orchestrator | 2025-05-06 01:04:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:04:05.119512 | orchestrator | 2025-05-06 01:04:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:04:08.168064 | orchestrator | 2025-05-06 01:04:08 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:04:08.168556 | orchestrator | 2025-05-06 01:04:08 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:04:08.169154 | orchestrator | 2025-05-06 01:04:08 | INFO  | Task 
ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:04:08.169678 | orchestrator | 2025-05-06 01:04:08 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:04:08.170384 | orchestrator | 2025-05-06 01:04:08 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:04:08.170862 | orchestrator | 2025-05-06 01:04:08 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:04:11.196574 | orchestrator | 2025-05-06 01:04:11 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:04:11.197884 | orchestrator | 2025-05-06 01:04:11 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:04:11.199006 | orchestrator | 2025-05-06 01:04:11 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:04:11.200753 | orchestrator | 2025-05-06 01:04:11 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:04:11.201704 | orchestrator | 2025-05-06 01:04:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:04:11.202315 | orchestrator | 2025-05-06 01:04:11 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:04:14.243100 | orchestrator | 2025-05-06 01:04:14 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:04:14.243762 | orchestrator | 2025-05-06 01:04:14 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state STARTED 2025-05-06 01:04:14.243801 | orchestrator | 2025-05-06 01:04:14 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:04:14.243820 | orchestrator | 2025-05-06 01:04:14 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:04:14.243845 | orchestrator | 2025-05-06 01:04:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:04:17.286723 | orchestrator | 2025-05-06 01:04:14 | INFO  | Wait 1 
second(s) until the next check 2025-05-06 01:04:47.821933 | orchestrator | 2025-05-06 01:04:47 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED 2025-05-06 01:04:47.827878 | orchestrator | 2025-05-06 01:04:47 | INFO  | Task bdbf7335-82be-4be7-86fc-3abfdf977382 is in state SUCCESS 2025-05-06 01:04:47.830229 | orchestrator | 2025-05-06 01:04:47.830269 | orchestrator | 2025-05-06 01:04:47.830286 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 01:04:47.830301 | orchestrator | 2025-05-06 01:04:47.830316 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 01:04:47.830330 | orchestrator | Tuesday 06 May 2025 01:03:18 +0000 (0:00:00.209) 0:00:00.209 *********** 2025-05-06 01:04:47.830470 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:04:47.830487 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:04:47.830501 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:04:47.830516 | orchestrator | 2025-05-06 01:04:47.830530 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 01:04:47.830544 | orchestrator | Tuesday 06 May 2025 01:03:18 +0000 (0:00:00.366) 0:00:00.575 *********** 2025-05-06 01:04:47.830557 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-06 01:04:47.830572 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-06 01:04:47.830880 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-06 01:04:47.830951 | orchestrator | 2025-05-06 01:04:47.830968 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-06 01:04:47.831033 | orchestrator | 2025-05-06 01:04:47.831052 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-06 01:04:47.831066 | orchestrator | Tuesday 06 May 2025 01:03:18 +0000 
(0:00:00.454) 0:00:01.029 *********** 2025-05-06 01:04:47.831080 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:04:47.831094 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:04:47.831107 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:04:47.831121 | orchestrator | 2025-05-06 01:04:47.831192 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 01:04:47.831209 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 01:04:47.831225 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 01:04:47.831239 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 01:04:47.831253 | orchestrator | 2025-05-06 01:04:47.831267 | orchestrator | 2025-05-06 01:04:47.831830 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 01:04:47.831855 | orchestrator | Tuesday 06 May 2025 01:03:19 +0000 (0:00:00.768) 0:00:01.798 *********** 2025-05-06 01:04:47.831869 | orchestrator | =============================================================================== 2025-05-06 01:04:47.831884 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.77s 2025-05-06 01:04:47.831898 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-05-06 01:04:47.831912 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-05-06 01:04:47.831926 | orchestrator | 2025-05-06 01:04:47.831940 | orchestrator | None 2025-05-06 01:04:47.832042 | orchestrator | 2025-05-06 01:04:47.832189 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 01:04:47.832211 | orchestrator | 2025-05-06 01:04:47.832225 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2025-05-06 01:04:47.832240 | orchestrator | Tuesday 06 May 2025 00:59:57 +0000 (0:00:00.483) 0:00:00.483 *********** 2025-05-06 01:04:47.832254 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:04:47.832492 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:04:47.832513 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:04:47.832560 | orchestrator | ok: [testbed-node-3] 2025-05-06 01:04:47.832578 | orchestrator | ok: [testbed-node-4] 2025-05-06 01:04:47.832686 | orchestrator | ok: [testbed-node-5] 2025-05-06 01:04:47.832707 | orchestrator | 2025-05-06 01:04:47.832722 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 01:04:47.832828 | orchestrator | Tuesday 06 May 2025 00:59:58 +0000 (0:00:00.822) 0:00:01.306 *********** 2025-05-06 01:04:47.832847 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-06 01:04:47.832861 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-06 01:04:47.832875 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-06 01:04:47.832889 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-06 01:04:47.833358 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-06 01:04:47.833374 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-06 01:04:47.833387 | orchestrator | 2025-05-06 01:04:47.833399 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-06 01:04:47.833412 | orchestrator | 2025-05-06 01:04:47.833424 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-06 01:04:47.833679 | orchestrator | Tuesday 06 May 2025 00:59:59 +0000 (0:00:00.576) 0:00:01.883 *********** 2025-05-06 01:04:47.833697 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 01:04:47.833712 | orchestrator | 2025-05-06 01:04:47.833725 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-06 01:04:47.833738 | orchestrator | Tuesday 06 May 2025 01:00:00 +0000 (0:00:00.900) 0:00:02.783 *********** 2025-05-06 01:04:47.833751 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:04:47.833764 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:04:47.833777 | orchestrator | ok: [testbed-node-3] 2025-05-06 01:04:47.833791 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:04:47.833804 | orchestrator | ok: [testbed-node-4] 2025-05-06 01:04:47.833817 | orchestrator | ok: [testbed-node-5] 2025-05-06 01:04:47.833829 | orchestrator | 2025-05-06 01:04:47.833842 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-06 01:04:47.833855 | orchestrator | Tuesday 06 May 2025 01:00:01 +0000 (0:00:01.092) 0:00:03.876 *********** 2025-05-06 01:04:47.833868 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:04:47.833881 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:04:47.833894 | orchestrator | ok: [testbed-node-3] 2025-05-06 01:04:47.833920 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:04:47.834087 | orchestrator | ok: [testbed-node-4] 2025-05-06 01:04:47.834202 | orchestrator | ok: [testbed-node-5] 2025-05-06 01:04:47.834222 | orchestrator | 2025-05-06 01:04:47.834235 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-06 01:04:47.834248 | orchestrator | Tuesday 06 May 2025 01:00:02 +0000 (0:00:01.039) 0:00:04.916 *********** 2025-05-06 01:04:47.834260 | orchestrator | ok: [testbed-node-0] => { 2025-05-06 01:04:47.834273 | orchestrator |  "changed": false, 2025-05-06 01:04:47.834285 | orchestrator |  "msg": "All assertions passed" 2025-05-06 01:04:47.834565 | orchestrator | } 2025-05-06 01:04:47.834583 
| orchestrator | ok: [testbed-node-1] => { 2025-05-06 01:04:47.834597 | orchestrator |  "changed": false, 2025-05-06 01:04:47.834610 | orchestrator |  "msg": "All assertions passed" 2025-05-06 01:04:47.834623 | orchestrator | } 2025-05-06 01:04:47.834651 | orchestrator | ok: [testbed-node-2] => { 2025-05-06 01:04:47.834665 | orchestrator |  "changed": false, 2025-05-06 01:04:47.834678 | orchestrator |  "msg": "All assertions passed" 2025-05-06 01:04:47.834691 | orchestrator | } 2025-05-06 01:04:47.834704 | orchestrator | ok: [testbed-node-3] => { 2025-05-06 01:04:47.834717 | orchestrator |  "changed": false, 2025-05-06 01:04:47.834731 | orchestrator |  "msg": "All assertions passed" 2025-05-06 01:04:47.834744 | orchestrator | } 2025-05-06 01:04:47.834757 | orchestrator | ok: [testbed-node-4] => { 2025-05-06 01:04:47.834770 | orchestrator |  "changed": false, 2025-05-06 01:04:47.834783 | orchestrator |  "msg": "All assertions passed" 2025-05-06 01:04:47.834796 | orchestrator | } 2025-05-06 01:04:47.834817 | orchestrator | ok: [testbed-node-5] => { 2025-05-06 01:04:47.834838 | orchestrator |  "changed": false, 2025-05-06 01:04:47.834859 | orchestrator |  "msg": "All assertions passed" 2025-05-06 01:04:47.834881 | orchestrator | } 2025-05-06 01:04:47.834894 | orchestrator | 2025-05-06 01:04:47.834979 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-06 01:04:47.834994 | orchestrator | Tuesday 06 May 2025 01:00:02 +0000 (0:00:00.518) 0:00:05.434 *********** 2025-05-06 01:04:47.835006 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.835019 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.835031 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.835428 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.835459 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.835472 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.835485 | 
orchestrator | 2025-05-06 01:04:47.835506 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-06 01:04:47.835519 | orchestrator | Tuesday 06 May 2025 01:00:03 +0000 (0:00:00.669) 0:00:06.103 *********** 2025-05-06 01:04:47.835531 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-06 01:04:47.835544 | orchestrator | 2025-05-06 01:04:47.835556 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-06 01:04:47.835569 | orchestrator | Tuesday 06 May 2025 01:00:07 +0000 (0:00:03.744) 0:00:09.847 *********** 2025-05-06 01:04:47.835581 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-06 01:04:47.835634 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-06 01:04:47.835650 | orchestrator | 2025-05-06 01:04:47.835662 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-06 01:04:47.835674 | orchestrator | Tuesday 06 May 2025 01:00:13 +0000 (0:00:06.660) 0:00:16.508 *********** 2025-05-06 01:04:47.835751 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-06 01:04:47.835767 | orchestrator | 2025-05-06 01:04:47.835780 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-06 01:04:47.835792 | orchestrator | Tuesday 06 May 2025 01:00:17 +0000 (0:00:03.349) 0:00:19.858 *********** 2025-05-06 01:04:47.835805 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-06 01:04:47.835817 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-06 01:04:47.835830 | orchestrator | 2025-05-06 01:04:47.835842 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-06 01:04:47.836089 | orchestrator | Tuesday 06 
May 2025 01:00:21 +0000 (0:00:04.026) 0:00:23.884 *********** 2025-05-06 01:04:47.836108 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-06 01:04:47.836120 | orchestrator | 2025-05-06 01:04:47.836194 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-06 01:04:47.836208 | orchestrator | Tuesday 06 May 2025 01:00:24 +0000 (0:00:03.336) 0:00:27.221 *********** 2025-05-06 01:04:47.836220 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-06 01:04:47.836231 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-06 01:04:47.836241 | orchestrator | 2025-05-06 01:04:47.836262 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-06 01:04:47.836273 | orchestrator | Tuesday 06 May 2025 01:00:32 +0000 (0:00:08.317) 0:00:35.538 *********** 2025-05-06 01:04:47.836283 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.836293 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.836304 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.836314 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.836324 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.836379 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.836393 | orchestrator | 2025-05-06 01:04:47.836404 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-06 01:04:47.836415 | orchestrator | Tuesday 06 May 2025 01:00:33 +0000 (0:00:00.710) 0:00:36.249 *********** 2025-05-06 01:04:47.836425 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.836436 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.836446 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.836500 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.836515 | orchestrator | skipping: [testbed-node-4] 
2025-05-06 01:04:47.836525 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.836535 | orchestrator | 2025-05-06 01:04:47.836545 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-06 01:04:47.836611 | orchestrator | Tuesday 06 May 2025 01:00:36 +0000 (0:00:02.903) 0:00:39.153 *********** 2025-05-06 01:04:47.836622 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:04:47.836633 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:04:47.836643 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:04:47.836653 | orchestrator | ok: [testbed-node-3] 2025-05-06 01:04:47.836663 | orchestrator | ok: [testbed-node-4] 2025-05-06 01:04:47.836730 | orchestrator | ok: [testbed-node-5] 2025-05-06 01:04:47.836745 | orchestrator | 2025-05-06 01:04:47.836755 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-06 01:04:47.836765 | orchestrator | Tuesday 06 May 2025 01:00:37 +0000 (0:00:01.218) 0:00:40.371 *********** 2025-05-06 01:04:47.836776 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.836786 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.836973 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.836987 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.836997 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.837051 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.837097 | orchestrator | 2025-05-06 01:04:47.837109 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-06 01:04:47.837187 | orchestrator | Tuesday 06 May 2025 01:00:41 +0000 (0:00:03.934) 0:00:44.305 *********** 2025-05-06 01:04:47.837205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.837218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.837239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.837251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.837342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.837361 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.837372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.837384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.837402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.837639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.837736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.837754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.837765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.837795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.837806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.838089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838176 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.838202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.838351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.838367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.838379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.838463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.838475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.838721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.838796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.838869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838956 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.838977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.839001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.839013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.839025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.839088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2025-05-06 01:04:47.839114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.841999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.842057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.842078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.842096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.842210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.842223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.842232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842256 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.842278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.842289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842298 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.842307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.842316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2025-05-06 01:04:47.842342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.842372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.842382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.842391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.842400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}}) 
 2025-05-06 01:04:47.842443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.842452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.842481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842503 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.842522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.842540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.842564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.842576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.842644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.842664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.842675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.842704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.842732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.842742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 
01:04:47.842753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.842763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.842797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': 
True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.842809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.842839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.842850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.842884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.842894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.842904 | orchestrator | 2025-05-06 01:04:47.842913 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-06 01:04:47.842923 | orchestrator | Tuesday 06 May 2025 01:00:44 +0000 (0:00:03.205) 0:00:47.511 *********** 2025-05-06 01:04:47.842931 | orchestrator | [WARNING]: Skipped 2025-05-06 01:04:47.842940 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-06 01:04:47.842949 | orchestrator | due to this access issue: 2025-05-06 01:04:47.842958 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-06 01:04:47.842967 | orchestrator | a directory 2025-05-06 01:04:47.842975 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-06 01:04:47.842984 | orchestrator | 2025-05-06 01:04:47.842993 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-06 01:04:47.843001 | orchestrator | Tuesday 06 May 2025 01:00:45 +0000 (0:00:00.872) 0:00:48.383 *********** 2025-05-06 01:04:47.843010 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 01:04:47.843019 | orchestrator | 2025-05-06 01:04:47.843027 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-06 01:04:47.843036 | orchestrator | Tuesday 06 May 2025 01:00:47 +0000 (0:00:01.588) 0:00:49.971 *********** 2025-05-06 01:04:47.843067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.843081 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.843094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.843110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.843120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.843175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.843185 | orchestrator | 2025-05-06 01:04:47.843194 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-06 01:04:47.843205 | orchestrator | Tuesday 06 May 2025 01:00:52 +0000 (0:00:05.279) 0:00:55.251 *********** 2025-05-06 01:04:47.843220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.843229 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.843245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.843255 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.843263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.843276 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.843285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.843294 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.843303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
 2025-05-06 01:04:47.843312 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.843330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.843340 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.843349 | orchestrator | 2025-05-06 01:04:47.843358 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-06 01:04:47.843367 | orchestrator | Tuesday 06 May 2025 01:00:57 +0000 (0:00:04.418) 0:00:59.669 *********** 2025-05-06 01:04:47.843376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.843386 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.843395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.843408 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.843417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.843426 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.843438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.843447 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.843462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.843471 | orchestrator | skipping: [testbed-node-3] 2025-05-06 
01:04:47.843480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.843495 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.843504 | orchestrator | 2025-05-06 01:04:47.843512 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-06 01:04:47.843521 | orchestrator | Tuesday 06 May 2025 01:01:01 +0000 (0:00:04.640) 0:01:04.310 *********** 2025-05-06 01:04:47.843529 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.843538 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.843546 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.843555 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.843563 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.843572 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.843580 | orchestrator | 2025-05-06 01:04:47.843589 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-06 01:04:47.843597 | orchestrator | Tuesday 06 May 2025 01:01:06 +0000 (0:00:04.606) 0:01:08.917 *********** 2025-05-06 01:04:47.843606 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.843614 | orchestrator | 2025-05-06 01:04:47.843623 | 
orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-06 01:04:47.843632 | orchestrator | Tuesday 06 May 2025 01:01:06 +0000 (0:00:00.082) 0:01:09.000 *********** 2025-05-06 01:04:47.843640 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.843648 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.843657 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.843665 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.843674 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.843682 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.843690 | orchestrator | 2025-05-06 01:04:47.843699 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-06 01:04:47.843708 | orchestrator | Tuesday 06 May 2025 01:01:07 +0000 (0:00:00.687) 0:01:09.687 *********** 2025-05-06 01:04:47.843716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.843730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.843777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.843799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.843808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.843836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.843855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.843867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.843896 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.843905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843914 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.843923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.843936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.843964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.843991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.844033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.844042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  
2025-05-06 01:04:47.844120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.844193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.844209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.844243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.844261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.844352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.844371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844385 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.844399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.844408 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.844425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844441 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.844450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.844459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.844510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.844520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.844589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.844623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.844657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.844682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.844703 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.844712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.844774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.844801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2025-05-06 01:04:47.844831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.844845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844854 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.844869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.844878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.844926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.844960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.844973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.844985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.845021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.845062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.845071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845084 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.845093 | orchestrator | 
2025-05-06 01:04:47.845102 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-06 01:04:47.845110 | orchestrator | Tuesday 06 May 2025 01:01:11 +0000 (0:00:04.137) 0:01:13.825 *********** 2025-05-06 01:04:47.845139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.845149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': 
'30'}}})  2025-05-06 01:04:47.845163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.845194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.845249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.845301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.845363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.845380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.845418 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.845431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.845454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.845499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845545 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.845558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-06 01:04:47.845576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.845613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.845626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.845644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.845686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}}) 
 2025-05-06 01:04:47.845708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.845745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.845776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.845862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.845880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.845906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.845925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.845934 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.845966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.845985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-06 01:04:47.845994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.846003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 
01:04:47.846060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.846069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846078 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.846087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.846104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.846142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.846162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.846172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.846196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.846223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.846232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.846256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.846269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.846293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.846311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.846320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.846352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.846362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846371 | orchestrator | 2025-05-06 01:04:47.846379 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-06 01:04:47.846388 | orchestrator | Tuesday 06 May 2025 01:01:16 +0000 (0:00:04.737) 0:01:18.563 *********** 2025-05-06 01:04:47.846397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.846406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.846459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.846477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.846496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.846518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.846565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.846586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.846595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.846604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846631 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.846668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.846690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-06 01:04:47.846699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.846720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.846738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.846757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.846778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.846787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.846816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.846838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 2025-05-06 01:04:47 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:04:47.847030 | orchestrator | 2025-05-06 01:04:47 | INFO  | Task 
9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED 2025-05-06 01:04:47.847117 | orchestrator | 2025-05-06 01:04:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:04:47.847193 | orchestrator | 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.847232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.847304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.847322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.847363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.847401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.847417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.847460 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.847483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.847529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.847596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.847637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.847770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847801 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.847826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-06 01:04:47.847908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.847931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.847962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.847977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.847992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.848047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.848114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.848215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.848243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.848271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.848315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.848350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.848364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.848377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.848390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.848450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.848463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.848477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 
'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.848494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848537 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.848550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 
01:04:47.848577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.848590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.848638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.848651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848664 | orchestrator | 2025-05-06 01:04:47.848677 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-06 01:04:47.848690 | orchestrator | Tuesday 06 May 2025 01:01:22 +0000 (0:00:06.455) 0:01:25.019 *********** 2025-05-06 01:04:47.848703 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.848716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.848790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.848822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.848849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.848888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.848910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.848931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.848954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.849102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.849115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849145 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.849160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.849173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.849252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.849284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.849306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.849464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.849497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.849509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.849567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.849582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.849607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.849681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.849717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.849730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.849771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.849797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-06 01:04:47.849810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.849855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.849868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849881 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.849894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.849907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.849981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.849994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.850047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.850065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.850118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.850150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850210 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.850252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.850267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.850302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.850317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.850362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.850377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.850413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.850435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.850480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.850492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.850533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.850547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850559 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.850578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.850591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850609 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.850662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.850694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-06 01:04:47.850707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.850742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.850773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.850786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.850824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.850841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.850854 | orchestrator | 2025-05-06 01:04:47.850867 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-06 01:04:47.850880 | orchestrator | Tuesday 06 May 2025 01:01:26 +0000 (0:00:03.775) 0:01:28.794 *********** 2025-05-06 01:04:47.850892 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:04:47.850905 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.850917 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.850930 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.850942 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:04:47.850954 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:04:47.850966 | orchestrator | 2025-05-06 01:04:47.850979 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-06 01:04:47.850991 | orchestrator | Tuesday 06 May 2025 01:01:31 +0000 (0:00:05.496) 0:01:34.291 *********** 2025-05-06 01:04:47.851009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.851032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.851098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.851178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.851193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.851231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.851269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.851283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.851320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.851333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851346 | 
orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.851364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.851383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.851454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.851469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 
01:04:47.851482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.851531 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.851565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.851583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 
01:04:47.851615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.851626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.851645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.851684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.851695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.851722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.851737 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.851748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851778 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.851794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.851809 | orchestrator 
| skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.851820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.851830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 
'timeout': '30'}}})  2025-05-06 01:04:47.851841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851852 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.851862 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.851878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.851898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.851992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.852034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.852065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.852117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.852166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.852192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.852203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.852274 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.852286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.852315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.852422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.852453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.852464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852526 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.852552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-06 01:04:47.852580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.852591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.852668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.852692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.852720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.852827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.852855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.852865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.852952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.852963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.852980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.852991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.853001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.853062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.853078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.853089 | orchestrator | 2025-05-06 01:04:47.853100 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-06 01:04:47.853110 | orchestrator | Tuesday 06 May 2025 01:01:35 +0000 (0:00:03.296) 0:01:37.587 *********** 2025-05-06 01:04:47.853138 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.853150 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.853160 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.853170 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.853180 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.853190 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.853204 | orchestrator | 2025-05-06 01:04:47.853214 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-06 01:04:47.853224 | orchestrator | Tuesday 06 May 2025 01:01:37 +0000 (0:00:02.750) 0:01:40.338 *********** 2025-05-06 01:04:47.853234 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.853244 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.853254 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.853264 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.853274 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.853284 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.853294 | orchestrator | 2025-05-06 01:04:47.853304 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-06 01:04:47.853314 | orchestrator | Tuesday 06 May 2025 01:01:39 +0000 (0:00:02.208) 0:01:42.547 *********** 2025-05-06 01:04:47.853324 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.853334 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.853344 | 
orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.853353 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.853363 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.853373 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.853383 | orchestrator | 2025-05-06 01:04:47.853393 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-06 01:04:47.853403 | orchestrator | Tuesday 06 May 2025 01:01:42 +0000 (0:00:02.324) 0:01:44.871 *********** 2025-05-06 01:04:47.853413 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.853423 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.853433 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.853443 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.853453 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.853462 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.853472 | orchestrator | 2025-05-06 01:04:47.853483 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-06 01:04:47.853493 | orchestrator | Tuesday 06 May 2025 01:01:44 +0000 (0:00:01.912) 0:01:46.784 *********** 2025-05-06 01:04:47.853503 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.853513 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.853523 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.853533 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.853542 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.853552 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.853562 | orchestrator | 2025-05-06 01:04:47.853573 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-06 01:04:47.853585 | orchestrator | Tuesday 06 May 2025 01:01:46 +0000 (0:00:01.936) 0:01:48.720 *********** 2025-05-06 01:04:47.853595 | 
orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.853606 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.853615 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.853625 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.853636 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.853646 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.853656 | orchestrator | 2025-05-06 01:04:47.853666 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-06 01:04:47.853676 | orchestrator | Tuesday 06 May 2025 01:01:48 +0000 (0:00:02.381) 0:01:51.101 *********** 2025-05-06 01:04:47.853686 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-06 01:04:47.853702 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.853714 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-06 01:04:47.853726 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.853739 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-06 01:04:47.853750 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.853762 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-06 01:04:47.853774 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.853785 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-06 01:04:47.853798 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.853810 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-06 01:04:47.853878 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.853903 | orchestrator | 2025-05-06 01:04:47.853923 | orchestrator | TASK [neutron : 
Copying over l3_agent.ini] ************************************* 2025-05-06 01:04:47.853942 | orchestrator | Tuesday 06 May 2025 01:01:50 +0000 (0:00:02.161) 0:01:53.263 *********** 2025-05-06 01:04:47.853978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.854001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854033 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.854211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.854235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.854246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.854273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.854381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.854398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.854427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.854445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854454 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.854512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.854534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-06 01:04:47.854569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.854620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 
01:04:47.854663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.854680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.854719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.854804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.854817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.854847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.854861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.854920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 
5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854934 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.854943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.854975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.854984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 
01:04:47.855034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.855047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.855065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.855087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.855105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.855174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.855209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.855223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855232 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.855241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.855300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.855347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.855443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.855486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.855512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.855530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.855540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.855619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.855633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855642 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.855651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.855661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855719 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.855760 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.855769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.855875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.855884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.855942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.855971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.855980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.855989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.856006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.856071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.856104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.856113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.856146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.856215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.856230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.856259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.856278 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.856336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.856351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856360 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.856369 | orchestrator | 2025-05-06 01:04:47.856378 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-06 01:04:47.856387 | orchestrator | Tuesday 06 May 2025 01:01:52 +0000 (0:00:02.084) 0:01:55.347 *********** 2025-05-06 01:04:47.856397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.856414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-05-06 01:04:47.856482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.856505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.856527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.856543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.856613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.856632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.856641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.856714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.856727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.856745 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.856754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.856850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.856868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.856885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.856950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.856971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.856980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.856994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.857014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.857099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857164 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.857177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.857186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.857300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.857318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.857327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.857402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.857423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.857432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.857464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.857513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.857525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857542 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.857550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.857628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.857648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.857663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.857685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.857744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.857753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.857781 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.857790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.857851 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.857859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 
01:04:47.857888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.857974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.857989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.857998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.858007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.858075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.858153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.858173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
-u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.858226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.858240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858253 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.858267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.858314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.858383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.858421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.858430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.858458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858466 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.858474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.858500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.858510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.858528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.858537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.858545 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.858553 | orchestrator |
2025-05-06 01:04:47.858561 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-05-06 01:04:47.858569 | orchestrator | Tuesday 06 May 2025 01:01:55 +0000 (0:00:02.945) 0:01:58.293 ***********
2025-05-06 01:04:47.858577 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.858585 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:04:47.858593 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:04:47.858601 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:04:47.858615 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.858624 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:04:47.858632 | orchestrator |
2025-05-06 01:04:47.858640 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-05-06 01:04:47.858648 | orchestrator | Tuesday 06 May 2025 01:01:58 +0000 (0:00:02.607) 0:02:00.901 ***********
2025-05-06 01:04:47.858656 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:04:47.858664 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.858672 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:04:47.858680 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:04:47.858688 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:04:47.858695 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:04:47.858703 | orchestrator |
2025-05-06 01:04:47.858711 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-05-06 01:04:47.858719 | orchestrator | Tuesday 06 May 2025 01:02:03 +0000 (0:00:05.488) 0:02:06.390 ***********
2025-05-06 01:04:47.858727 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.858735 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:04:47.858743 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:04:47.858751 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:04:47.858758 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.858767 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:04:47.858776 | orchestrator |
2025-05-06 01:04:47.858786 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-05-06 01:04:47.858795 | orchestrator | Tuesday 06 May 2025 01:02:05 +0000 (0:00:02.075) 0:02:08.465 ***********
2025-05-06 01:04:47.858808 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.858817 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:04:47.858826 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:04:47.858853 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:04:47.858863 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:04:47.858873 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.858882 | orchestrator |
2025-05-06 01:04:47.858892 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-05-06 01:04:47.858901 | orchestrator | Tuesday 06 May 2025 01:02:08 +0000 (0:00:02.172) 0:02:10.638 ***********
2025-05-06 01:04:47.858911 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.858919 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:04:47.858927 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:04:47.858935 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:04:47.858942 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:04:47.858950 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.858958 | orchestrator |
2025-05-06 01:04:47.858966 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-05-06 01:04:47.858975 | orchestrator | Tuesday 06 May 2025 01:02:11 +0000 (0:00:03.422) 0:02:14.060 ***********
2025-05-06 01:04:47.858982 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:04:47.858990 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.858998 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:04:47.859006 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:04:47.859014 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:04:47.859022 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.859030 | orchestrator |
2025-05-06 01:04:47.859038 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-05-06 01:04:47.859046 | orchestrator | Tuesday 06 May 2025 01:02:14 +0000 (0:00:02.702) 0:02:16.763 ***********
2025-05-06 01:04:47.859054 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:04:47.859062 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.859070 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:04:47.859078 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:04:47.859086 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:04:47.859094 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.859102 | orchestrator |
2025-05-06 01:04:47.859110 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-05-06 01:04:47.859118 | orchestrator | Tuesday 06 May 2025 01:02:15 +0000 (0:00:01.784) 0:02:18.547 ***********
2025-05-06 01:04:47.859161 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.859170 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:04:47.859178 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:04:47.859186 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:04:47.859194 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.859202 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:04:47.859210 | orchestrator |
2025-05-06 01:04:47.859218 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-05-06 01:04:47.859227 | orchestrator | Tuesday 06 May 2025 01:02:20 +0000 (0:00:04.455) 0:02:23.003 ***********
2025-05-06 01:04:47.859234 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.859242 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:04:47.859250 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:04:47.859257 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:04:47.859264 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:04:47.859271 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.859281 | orchestrator |
2025-05-06 01:04:47.859289 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-05-06 01:04:47.859296 | orchestrator | Tuesday 06 May 2025 01:02:22 +0000 (0:00:01.872) 0:02:24.876 ***********
2025-05-06 01:04:47.859303 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.859310 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:04:47.859317 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:04:47.859328 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:04:47.859335 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:04:47.859342 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.859349 | orchestrator |
2025-05-06 01:04:47.859356 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-05-06 01:04:47.859363 | orchestrator | Tuesday 06 May 2025 01:02:24 +0000 (0:00:01.844) 0:02:26.721 ***********
2025-05-06 01:04:47.859370 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-06 01:04:47.859378 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:04:47.859385 | orchestrator |
skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-06 01:04:47.859392 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.859399 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-06 01:04:47.859406 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.859536 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-06 01:04:47.859552 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.859564 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-06 01:04:47.859577 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.859590 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-06 01:04:47.859602 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.859615 | orchestrator | 2025-05-06 01:04:47.859629 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-06 01:04:47.859642 | orchestrator | Tuesday 06 May 2025 01:02:26 +0000 (0:00:02.348) 0:02:29.069 *********** 2025-05-06 01:04:47.859674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.859684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.859733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.859750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.859757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.859776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 
01:04:47.859804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.859812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.859831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.859839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859846 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.859853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.859875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.859909 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.859939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.859947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.859965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.859973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.859994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.860021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.860035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.860056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860065 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.860087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.860095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.860102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.860110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.860168 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.860175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.860190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.860197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.860232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.860239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860246 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.860254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.860261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.860312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.860327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.860348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.860367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.860382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.860389 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.860423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.860430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860438 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.860445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-06 01:04:47.860452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.860485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-06 01:04:47.860500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.860514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.860522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-06 01:04:47.860554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:04:47.860569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.860576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-06 01:04:47.860609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-06 01:04:47.860617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860629 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:04:47.860641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-06 01:04:47.860654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-06 01:04:47.860709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.860724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.860731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-06 01:04:47.860765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:04:47.860779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.860787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-06 01:04:47.860806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-06 01:04:47.860828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860836 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:04:47.860843 | orchestrator |
2025-05-06 01:04:47.860851 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-05-06 01:04:47.860861 | orchestrator | Tuesday 06 May 2025 01:02:29 +0000 (0:00:02.655) 0:02:31.724 ***********
2025-05-06 01:04:47.860869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-06 01:04:47.860876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-06 01:04:47.860950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.860975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.860990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.860998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-06 01:04:47.861023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.861032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.861039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.861047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-06 01:04:47.861057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.861065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.861074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.861082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.861089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-06 01:04:47.861100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.861107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.861118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.861138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-06 01:04:47.861145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-06 01:04:47.861153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.861163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:04:47.861171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.861189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.861223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.861265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.861280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.861298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.861316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.861334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 
01:04:47.861359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.861366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861387 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.861412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.861427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.861487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.861494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.861509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.861526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.861552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.861560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-06 01:04:47.861577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-06 01:04:47.861614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.861654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.861671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.861697 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.861704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.861712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.861741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861748 | orchestrator 
| skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.861763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.861773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-06 01:04:47.861792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:04:47.861807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:04:47.861814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-06 01:04:47.861835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-06 01:04:47.861842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-06 01:04:47.861849 | orchestrator | 2025-05-06 01:04:47.861856 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-06 01:04:47.861863 | orchestrator | Tuesday 06 May 2025 01:02:33 +0000 (0:00:04.095) 0:02:35.819 *********** 2025-05-06 01:04:47.861870 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:04:47.861878 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:04:47.861884 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:04:47.861891 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:04:47.861898 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:04:47.861905 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:04:47.861912 | orchestrator | 2025-05-06 01:04:47.861919 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-06 01:04:47.861926 | orchestrator | Tuesday 06 May 2025 01:02:33 +0000 (0:00:00.565) 0:02:36.385 *********** 2025-05-06 01:04:47.861933 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:04:47.861940 | orchestrator | 2025-05-06 01:04:47.861947 | 
orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-06 01:04:47.861954 | orchestrator | Tuesday 06 May 2025 01:02:36 +0000 (0:00:02.448) 0:02:38.834 *********** 2025-05-06 01:04:47.861961 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:04:47.861968 | orchestrator | 2025-05-06 01:04:47.861975 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-06 01:04:47.861982 | orchestrator | Tuesday 06 May 2025 01:02:38 +0000 (0:00:02.357) 0:02:41.191 *********** 2025-05-06 01:04:47.861989 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:04:47.861996 | orchestrator | 2025-05-06 01:04:47.862002 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-06 01:04:47.862009 | orchestrator | Tuesday 06 May 2025 01:03:21 +0000 (0:00:42.904) 0:03:24.096 *********** 2025-05-06 01:04:47.862035 | orchestrator | 2025-05-06 01:04:47.862042 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-06 01:04:47.862056 | orchestrator | Tuesday 06 May 2025 01:03:21 +0000 (0:00:00.073) 0:03:24.170 *********** 2025-05-06 01:04:47.862063 | orchestrator | 2025-05-06 01:04:47.862070 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-06 01:04:47.862076 | orchestrator | Tuesday 06 May 2025 01:03:21 +0000 (0:00:00.247) 0:03:24.417 *********** 2025-05-06 01:04:47.862083 | orchestrator | 2025-05-06 01:04:47.862090 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-06 01:04:47.862097 | orchestrator | Tuesday 06 May 2025 01:03:21 +0000 (0:00:00.057) 0:03:24.474 *********** 2025-05-06 01:04:47.862104 | orchestrator | 2025-05-06 01:04:47.862111 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-06 01:04:47.862118 | orchestrator | 
Tuesday 06 May 2025 01:03:21 +0000 (0:00:00.054) 0:03:24.529 *********** 2025-05-06 01:04:47.862135 | orchestrator | 2025-05-06 01:04:47.862143 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-06 01:04:47.862150 | orchestrator | Tuesday 06 May 2025 01:03:22 +0000 (0:00:00.050) 0:03:24.579 *********** 2025-05-06 01:04:47.862156 | orchestrator | 2025-05-06 01:04:47.862163 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-05-06 01:04:47.862170 | orchestrator | Tuesday 06 May 2025 01:03:22 +0000 (0:00:00.260) 0:03:24.840 *********** 2025-05-06 01:04:47.862177 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:04:47.862184 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:04:47.862191 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:04:47.862198 | orchestrator | 2025-05-06 01:04:47.862205 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-06 01:04:47.862215 | orchestrator | Tuesday 06 May 2025 01:03:47 +0000 (0:00:25.265) 0:03:50.105 *********** 2025-05-06 01:04:50.867864 | orchestrator | changed: [testbed-node-5] 2025-05-06 01:04:50.867969 | orchestrator | changed: [testbed-node-4] 2025-05-06 01:04:50.867988 | orchestrator | changed: [testbed-node-3] 2025-05-06 01:04:50.868003 | orchestrator | 2025-05-06 01:04:50.868018 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 01:04:50.868033 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-06 01:04:50.868049 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-06 01:04:50.868063 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-06 01:04:50.868077 | orchestrator | testbed-node-3 : ok=15  
changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-06 01:04:50.868091 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-06 01:04:50.868105 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-06 01:04:50.868118 | orchestrator |
2025-05-06 01:04:50.868166 | orchestrator |
2025-05-06 01:04:50.868181 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 01:04:50.868195 | orchestrator | Tuesday 06 May 2025 01:04:45 +0000 (0:00:57.612) 0:04:47.718 ***********
2025-05-06 01:04:50.868314 | orchestrator | ===============================================================================
2025-05-06 01:04:50.868329 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 57.61s
2025-05-06 01:04:50.868343 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.90s
2025-05-06 01:04:50.868357 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.27s
2025-05-06 01:04:50.868399 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.32s
2025-05-06 01:04:50.868440 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.66s
2025-05-06 01:04:50.868455 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.46s
2025-05-06 01:04:50.868469 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.50s
2025-05-06 01:04:50.868483 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.49s
2025-05-06 01:04:50.868497 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.28s
2025-05-06 01:04:50.868511 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.74s
2025-05-06 01:04:50.868525 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.64s
2025-05-06 01:04:50.868539 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.61s
2025-05-06 01:04:50.868553 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 4.46s
2025-05-06 01:04:50.868566 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.42s
2025-05-06 01:04:50.868581 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.14s
2025-05-06 01:04:50.868595 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.10s
2025-05-06 01:04:50.868608 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.03s
2025-05-06 01:04:50.868622 | orchestrator | Setting sysctl values --------------------------------------------------- 3.93s
2025-05-06 01:04:50.868636 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.78s
2025-05-06 01:04:50.868650 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.74s
2025-05-06 01:04:50.868664 | orchestrator | 2025-05-06 01:04:47 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:04:50.868678 | orchestrator | 2025-05-06 01:04:47 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:04:50.868708 | orchestrator | 2025-05-06 01:04:50 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:04:50.869539 | orchestrator | 2025-05-06 01:04:50 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:04:50.869569 | orchestrator | 2025-05-06 01:04:50 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED
2025-05-06 01:04:50.871442 | orchestrator | 2025-05-06 01:04:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:04:50.873428 | orchestrator | 2025-05-06 01:04:50 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:04:53.928749 | orchestrator | 2025-05-06 01:04:50 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:04:53.928873 | orchestrator | 2025-05-06 01:04:53 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:04:53.929177 | orchestrator | 2025-05-06 01:04:53 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:04:53.930333 | orchestrator | 2025-05-06 01:04:53 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED
2025-05-06 01:04:53.931085 | orchestrator | 2025-05-06 01:04:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:04:53.932031 | orchestrator | 2025-05-06 01:04:53 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:04:56.967165 | orchestrator | 2025-05-06 01:04:53 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:04:56.967293 | orchestrator | 2025-05-06 01:04:56 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:04:56.967782 | orchestrator | 2025-05-06 01:04:56 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:04:56.967846 | orchestrator | 2025-05-06 01:04:56 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED
2025-05-06 01:04:56.967872 | orchestrator | 2025-05-06 01:04:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:04:56.968465 | orchestrator | 2025-05-06 01:04:56 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:04:56.968633 | orchestrator | 2025-05-06 01:04:56 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:04:59.998948 | orchestrator | 2025-05-06 01:04:59 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:00.009163 | orchestrator | 2025-05-06 01:05:00 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:00.010806 | orchestrator | 2025-05-06 01:05:00 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state STARTED
2025-05-06 01:05:00.010853 | orchestrator | 2025-05-06 01:05:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:00.014248 | orchestrator | 2025-05-06 01:05:00 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:05:03.049064 | orchestrator | 2025-05-06 01:05:00 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:03.049345 | orchestrator | 2025-05-06 01:05:03 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:03.050388 | orchestrator | 2025-05-06 01:05:03 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:03.050442 | orchestrator | 2025-05-06 01:05:03 | INFO  | Task 9303be89-e3ba-4f5c-9edd-c1848e76a6de is in state SUCCESS
2025-05-06 01:05:03.052192 | orchestrator |
2025-05-06 01:05:03.052597 | orchestrator |
2025-05-06 01:05:03.052640 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 01:05:03.052667 | orchestrator |
2025-05-06 01:05:03.052688 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 01:05:03.052715 | orchestrator | Tuesday 06 May 2025 01:03:11 +0000 (0:00:00.379) 0:00:00.379 ***********
2025-05-06 01:05:03.052739 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:05:03.052761 | orchestrator | ok: [testbed-node-1]
2025-05-06 01:05:03.052776 | orchestrator | ok: [testbed-node-2]
2025-05-06 01:05:03.052790 | orchestrator |
2025-05-06 01:05:03.052804 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 01:05:03.052818 | orchestrator | Tuesday 06 May 2025 01:03:11 +0000 (0:00:00.366) 0:00:00.746 ***********
2025-05-06 01:05:03.052832 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-05-06 01:05:03.052846 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-05-06 01:05:03.052860 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-05-06 01:05:03.052874 | orchestrator |
2025-05-06 01:05:03.052888 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-06 01:05:03.052902 | orchestrator |
2025-05-06 01:05:03.052916 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-06 01:05:03.052929 | orchestrator | Tuesday 06 May 2025 01:03:12 +0000 (0:00:00.296) 0:00:01.042 ***********
2025-05-06 01:05:03.052943 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 01:05:03.052957 | orchestrator |
2025-05-06 01:05:03.052971 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-06 01:05:03.052985 | orchestrator | Tuesday 06 May 2025 01:03:13 +0000 (0:00:00.745) 0:00:01.787 ***********
2025-05-06 01:05:03.053000 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-05-06 01:05:03.053013 | orchestrator |
2025-05-06 01:05:03.053027 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-05-06 01:05:03.053041 | orchestrator | Tuesday 06 May 2025 01:03:16 +0000 (0:00:03.574) 0:00:05.362 ***********
2025-05-06 01:05:03.053079 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-05-06 01:05:03.053094 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-05-06 01:05:03.053132 | orchestrator |
2025-05-06 01:05:03.053148 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-05-06 01:05:03.053163 | orchestrator | Tuesday 06 May 2025 01:03:23 +0000 (0:00:06.620) 0:00:11.983 ***********
2025-05-06 01:05:03.053177 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-06 01:05:03.053191 | orchestrator |
2025-05-06 01:05:03.053217 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-05-06 01:05:03.053232 | orchestrator | Tuesday 06 May 2025 01:03:27 +0000 (0:00:03.944) 0:00:15.928 ***********
2025-05-06 01:05:03.053250 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-06 01:05:03.053268 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-05-06 01:05:03.053286 | orchestrator |
2025-05-06 01:05:03.053302 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-05-06 01:05:03.053321 | orchestrator | Tuesday 06 May 2025 01:03:31 +0000 (0:00:04.016) 0:00:19.944 ***********
2025-05-06 01:05:03.053338 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-06 01:05:03.053357 | orchestrator |
2025-05-06 01:05:03.053373 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-05-06 01:05:03.053390 | orchestrator | Tuesday 06 May 2025 01:03:34 +0000 (0:00:03.411) 0:00:23.355 ***********
2025-05-06 01:05:03.053407 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-05-06 01:05:03.053424 | orchestrator |
2025-05-06 01:05:03.053440 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-05-06 01:05:03.053457 | orchestrator | Tuesday 06 May 2025 01:03:39 +0000 (0:00:04.518) 0:00:27.873 ***********
2025-05-06 01:05:03.053475 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:05:03.053491 | orchestrator |
2025-05-06 01:05:03.053508 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-05-06 01:05:03.053524 | orchestrator | Tuesday 06 May 2025 01:03:42 +0000 (0:00:03.532) 0:00:31.405 ***********
2025-05-06 01:05:03.053540 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:05:03.053556 | orchestrator |
2025-05-06 01:05:03.053573 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-05-06 01:05:03.053590 | orchestrator | Tuesday 06 May 2025 01:03:46 +0000 (0:00:04.090) 0:00:35.496 ***********
2025-05-06 01:05:03.053604 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:05:03.053618 | orchestrator |
2025-05-06 01:05:03.053632 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-05-06 01:05:03.053645 | orchestrator | Tuesday 06 May 2025 01:03:50 +0000 (0:00:04.197) 0:00:39.693 ***********
2025-05-06 01:05:03.053673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.053692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.053716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.053731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.053746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.053774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.053790 | orchestrator |
2025-05-06 01:05:03.053811 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-05-06 01:05:03.053825 | orchestrator | Tuesday 06 May 2025 01:03:54 +0000 (0:00:03.641) 0:00:43.335 ***********
2025-05-06 01:05:03.053839 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:05:03.053853 | orchestrator |
2025-05-06 01:05:03.053867 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-05-06 01:05:03.053881 | orchestrator | Tuesday 06 May 2025 01:03:55 +0000 (0:00:00.421) 0:00:43.756 ***********
2025-05-06 01:05:03.053894 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:05:03.053908 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:05:03.053922 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:05:03.053935 | orchestrator |
2025-05-06 01:05:03.053949 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-05-06 01:05:03.053962 | orchestrator | Tuesday 06 May 2025 01:03:56 +0000 (0:00:01.546) 0:00:45.303 ***********
2025-05-06 01:05:03.053976 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-06 01:05:03.053990 | orchestrator |
2025-05-06 01:05:03.054003 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-05-06 01:05:03.054059 | orchestrator | Tuesday 06 May 2025 01:03:58 +0000 (0:00:01.810) 0:00:47.113 ***********
2025-05-06 01:05:03.054077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054127 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:05:03.054144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054190 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:05:03.054206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054236 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:05:03.054250 | orchestrator |
2025-05-06 01:05:03.054264 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-05-06 01:05:03.054278 | orchestrator | Tuesday 06 May 2025 01:04:01 +0000 (0:00:02.669) 0:00:49.783 ***********
2025-05-06 01:05:03.054291 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:05:03.054305 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:05:03.054319 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:05:03.054333 | orchestrator |
2025-05-06 01:05:03.054347 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-06 01:05:03.054360 | orchestrator | Tuesday 06 May 2025 01:04:01 +0000 (0:00:00.690) 0:00:50.474 ***********
2025-05-06 01:05:03.054375 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 01:05:03.054388 | orchestrator |
2025-05-06 01:05:03.054402 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-05-06 01:05:03.054416 | orchestrator | Tuesday 06 May 2025 01:04:03 +0000 (0:00:01.866) 0:00:52.340 ***********
2025-05-06 01:05:03.054458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054587 | orchestrator |
2025-05-06 01:05:03.054601 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2025-05-06 01:05:03.054615 | orchestrator | Tuesday 06 May 2025 01:04:07 +0000 (0:00:03.791) 0:00:56.132 ***********
2025-05-06 01:05:03.054637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054667 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:05:03.054686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054721 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:05:03.054745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054782 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:05:03.054796 | orchestrator |
2025-05-06 01:05:03.054811 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2025-05-06 01:05:03.054825 | orchestrator | Tuesday 06 May 2025 01:04:08 +0000 (0:00:01.532) 0:00:57.664 ***********
2025-05-06 01:05:03.054839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054874 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:05:03.054899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-06 01:05:03.054925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:05:03.054941 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:05:03.054955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-06 01:05:03.054970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:05:03.054984 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:05:03.054998 | orchestrator | 2025-05-06 01:05:03.055012 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-06 01:05:03.055026 | orchestrator | Tuesday 06 May 2025 01:04:10 +0000 (0:00:02.057) 0:00:59.722 *********** 2025-05-06 01:05:03.055050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 01:05:03.055072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 01:05:03.055094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 01:05:03.055131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:05:03.055159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:05:03.055181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:05:03.055196 | orchestrator | 2025-05-06 01:05:03.055210 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-06 01:05:03.055224 | orchestrator | Tuesday 06 May 2025 01:04:13 +0000 (0:00:02.392) 0:01:02.115 *********** 2025-05-06 01:05:03.055238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 01:05:03.055260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 01:05:03.055286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 01:05:03.055302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:05:03.055323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:05:03.055337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:05:03.055351 | orchestrator | 2025-05-06 01:05:03.055365 | 
orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-06 01:05:03.055385 | orchestrator | Tuesday 06 May 2025 01:04:18 +0000 (0:00:04.677) 0:01:06.792 *********** 2025-05-06 01:05:03.055409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-06 01:05:03.055424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:05:03.055448 | orchestrator | skipping: 
[testbed-node-0] 2025-05-06 01:05:03.055463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-06 01:05:03.055478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:05:03.055492 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:05:03.055512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-06 01:05:03.055537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:05:03.055552 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:05:03.055566 | orchestrator | 2025-05-06 01:05:03.055581 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-06 01:05:03.055595 | orchestrator | Tuesday 06 May 2025 01:04:18 +0000 (0:00:00.739) 0:01:07.532 *********** 2025-05-06 01:05:03.055609 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 01:05:03.055636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 01:05:03.055651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-06 01:05:03.055672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:05:03.055696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:05:03.055718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:05:03.055733 | orchestrator | 2025-05-06 01:05:03.055747 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-06 01:05:03.055761 | orchestrator | Tuesday 06 May 2025 01:04:21 +0000 (0:00:02.360) 0:01:09.892 *********** 2025-05-06 01:05:03.055775 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:05:03.055788 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:05:03.055802 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:05:03.055816 | orchestrator | 2025-05-06 01:05:03.055830 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-06 01:05:03.055843 | orchestrator | Tuesday 06 May 2025 01:04:21 +0000 (0:00:00.295) 0:01:10.188 *********** 2025-05-06 01:05:03.055857 | orchestrator | changed: 
[testbed-node-0] 2025-05-06 01:05:03.055871 | orchestrator | 2025-05-06 01:05:03.055884 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-06 01:05:03.055898 | orchestrator | Tuesday 06 May 2025 01:04:24 +0000 (0:00:02.572) 0:01:12.760 *********** 2025-05-06 01:05:03.055912 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:05:03.055925 | orchestrator | 2025-05-06 01:05:03.055939 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-06 01:05:03.055953 | orchestrator | Tuesday 06 May 2025 01:04:26 +0000 (0:00:02.482) 0:01:15.242 *********** 2025-05-06 01:05:03.055966 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:05:03.055980 | orchestrator | 2025-05-06 01:05:03.055994 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-06 01:05:03.056008 | orchestrator | Tuesday 06 May 2025 01:04:42 +0000 (0:00:15.799) 0:01:31.041 *********** 2025-05-06 01:05:03.056021 | orchestrator | 2025-05-06 01:05:03.056035 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-06 01:05:03.056049 | orchestrator | Tuesday 06 May 2025 01:04:42 +0000 (0:00:00.057) 0:01:31.099 *********** 2025-05-06 01:05:03.056062 | orchestrator | 2025-05-06 01:05:03.056076 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-06 01:05:03.056090 | orchestrator | Tuesday 06 May 2025 01:04:42 +0000 (0:00:00.213) 0:01:31.313 *********** 2025-05-06 01:05:03.056104 | orchestrator | 2025-05-06 01:05:03.056272 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-06 01:05:03.056294 | orchestrator | Tuesday 06 May 2025 01:04:42 +0000 (0:00:00.066) 0:01:31.379 *********** 2025-05-06 01:05:03.056308 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:05:03.056322 | orchestrator | changed: 
[testbed-node-1] 2025-05-06 01:05:03.056336 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:05:03.056348 | orchestrator | 2025-05-06 01:05:03.056361 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-06 01:05:03.056373 | orchestrator | Tuesday 06 May 2025 01:04:54 +0000 (0:00:12.286) 0:01:43.666 *********** 2025-05-06 01:05:03.056385 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:05:03.056398 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:05:03.056410 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:05:03.056422 | orchestrator | 2025-05-06 01:05:03.056435 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 01:05:03.056468 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-06 01:05:06.088783 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-06 01:05:06.088888 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-06 01:05:06.088906 | orchestrator | 2025-05-06 01:05:06.088921 | orchestrator | 2025-05-06 01:05:06.088937 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 01:05:06.088951 | orchestrator | Tuesday 06 May 2025 01:05:02 +0000 (0:00:07.645) 0:01:51.311 *********** 2025-05-06 01:05:06.088965 | orchestrator | =============================================================================== 2025-05-06 01:05:06.088980 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.80s 2025-05-06 01:05:06.088993 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.29s 2025-05-06 01:05:06.089020 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 7.65s 2025-05-06 
01:05:06.089035 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.62s
2025-05-06 01:05:06.089049 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.68s
2025-05-06 01:05:06.089063 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.52s
2025-05-06 01:05:06.089077 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.20s
2025-05-06 01:05:06.089090 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.09s
2025-05-06 01:05:06.089161 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.02s
2025-05-06 01:05:06.089180 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.94s
2025-05-06 01:05:06.089246 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.79s
2025-05-06 01:05:06.089420 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 3.64s
2025-05-06 01:05:06.089443 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.58s
2025-05-06 01:05:06.089458 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.53s
2025-05-06 01:05:06.089472 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.41s
2025-05-06 01:05:06.089487 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.67s
2025-05-06 01:05:06.089501 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.57s
2025-05-06 01:05:06.089515 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.48s
2025-05-06 01:05:06.089529 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.39s
2025-05-06 01:05:06.089543 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.36s
2025-05-06 01:05:06.089558 | orchestrator | 2025-05-06 01:05:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:06.089572 | orchestrator | 2025-05-06 01:05:03 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:05:06.089586 | orchestrator | 2025-05-06 01:05:03 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:06.089617 | orchestrator | 2025-05-06 01:05:06 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:06.090166 | orchestrator | 2025-05-06 01:05:06 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:06.090200 | orchestrator | 2025-05-06 01:05:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:06.090225 | orchestrator | 2025-05-06 01:05:06 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:05:06.091193 | orchestrator | 2025-05-06 01:05:06 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:09.140942 | orchestrator | 2025-05-06 01:05:06 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:09.141088 | orchestrator | 2025-05-06 01:05:09 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:09.142340 | orchestrator | 2025-05-06 01:05:09 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:09.148725 | orchestrator | 2025-05-06 01:05:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:12.192558 | orchestrator | 2025-05-06 01:05:09 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:05:12.192687 | orchestrator | 2025-05-06 01:05:09 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:12.192707 | orchestrator | 2025-05-06 01:05:09 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:12.192764 | orchestrator | 2025-05-06 01:05:12 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:12.193357 | orchestrator | 2025-05-06 01:05:12 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:12.193403 | orchestrator | 2025-05-06 01:05:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:12.194375 | orchestrator | 2025-05-06 01:05:12 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:05:12.194846 | orchestrator | 2025-05-06 01:05:12 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:15.236034 | orchestrator | 2025-05-06 01:05:12 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:15.236247 | orchestrator | 2025-05-06 01:05:15 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:15.237403 | orchestrator | 2025-05-06 01:05:15 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:15.237475 | orchestrator | 2025-05-06 01:05:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:15.238751 | orchestrator | 2025-05-06 01:05:15 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:05:15.240342 | orchestrator | 2025-05-06 01:05:15 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:18.284892 | orchestrator | 2025-05-06 01:05:15 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:18.285023 | orchestrator | 2025-05-06 01:05:18 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:18.286686 | orchestrator | 2025-05-06 01:05:18 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:18.287590 | orchestrator | 2025-05-06 01:05:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:18.290974 | orchestrator | 2025-05-06 01:05:18 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:05:18.292171 | orchestrator | 2025-05-06 01:05:18 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:18.292419 | orchestrator | 2025-05-06 01:05:18 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:21.334370 | orchestrator | 2025-05-06 01:05:21 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:21.334805 | orchestrator | 2025-05-06 01:05:21 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:21.334898 | orchestrator | 2025-05-06 01:05:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:21.335780 | orchestrator | 2025-05-06 01:05:21 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state STARTED
2025-05-06 01:05:21.337231 | orchestrator | 2025-05-06 01:05:21 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:24.393538 | orchestrator | 2025-05-06 01:05:21 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:24.393686 | orchestrator | 2025-05-06 01:05:24 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:24.396578 | orchestrator | 2025-05-06 01:05:24 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:24.398227 | orchestrator | 2025-05-06 01:05:24 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:24.400424 | orchestrator | 2025-05-06 01:05:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:24.401352 | orchestrator | 2025-05-06 01:05:24 | INFO  | Task 6304ee9b-0e7b-4a35-871e-14b438fff98c is in state SUCCESS
2025-05-06 01:05:24.402799 | orchestrator | 2025-05-06 01:05:24 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:27.458832 | orchestrator | 2025-05-06 01:05:24 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:27.459008 | orchestrator | 2025-05-06 01:05:27 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:27.460639 | orchestrator | 2025-05-06 01:05:27 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:27.463610 | orchestrator | 2025-05-06 01:05:27 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:27.465780 | orchestrator | 2025-05-06 01:05:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:27.468183 | orchestrator | 2025-05-06 01:05:27 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:27.468899 | orchestrator | 2025-05-06 01:05:27 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:30.507501 | orchestrator | 2025-05-06 01:05:30 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:30.508828 | orchestrator | 2025-05-06 01:05:30 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:30.510572 | orchestrator | 2025-05-06 01:05:30 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:30.511721 | orchestrator | 2025-05-06 01:05:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:30.513117 | orchestrator | 2025-05-06 01:05:30 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:33.556233 | orchestrator | 2025-05-06 01:05:30 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:33.556379 | orchestrator | 2025-05-06 01:05:33 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:33.556753 | orchestrator | 2025-05-06 01:05:33 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:33.557477 | orchestrator | 2025-05-06 01:05:33 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:33.558305 | orchestrator | 2025-05-06 01:05:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:33.559742 | orchestrator | 2025-05-06 01:05:33 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:36.604677 | orchestrator | 2025-05-06 01:05:33 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:36.699907 | orchestrator | 2025-05-06 01:05:36 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:39.651945 | orchestrator | 2025-05-06 01:05:36 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:39.652122 | orchestrator | 2025-05-06 01:05:36 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:39.652145 | orchestrator | 2025-05-06 01:05:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:39.652160 | orchestrator | 2025-05-06 01:05:36 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:39.652176 | orchestrator | 2025-05-06 01:05:36 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:39.652209 | orchestrator | 2025-05-06 01:05:39 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:39.657948 | orchestrator | 2025-05-06 01:05:39 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:39.658557 | orchestrator | 2025-05-06 01:05:39 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:39.659169 | orchestrator | 2025-05-06 01:05:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:39.660156 | orchestrator | 2025-05-06 01:05:39 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:42.700913 | orchestrator | 2025-05-06 01:05:39 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:42.701117 | orchestrator | 2025-05-06 01:05:42 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:42.701341 | orchestrator | 2025-05-06 01:05:42 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:42.701377 | orchestrator | 2025-05-06 01:05:42 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:42.701948 | orchestrator | 2025-05-06 01:05:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:42.702663 | orchestrator | 2025-05-06 01:05:42 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:42.702705 | orchestrator | 2025-05-06 01:05:42 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:45.728515 | orchestrator | 2025-05-06 01:05:45 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:45.733715 | orchestrator | 2025-05-06 01:05:45 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:45.734166 | orchestrator | 2025-05-06 01:05:45 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:45.734858 | orchestrator | 2025-05-06 01:05:45 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:45.735342 | orchestrator | 2025-05-06 01:05:45 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:48.764607 | orchestrator | 2025-05-06 01:05:45 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:48.764830 | orchestrator | 2025-05-06 01:05:48 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:48.765484 | orchestrator | 2025-05-06 01:05:48 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:48.765521 | orchestrator | 2025-05-06 01:05:48 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:48.765933 | orchestrator | 2025-05-06 01:05:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:48.767892 | orchestrator | 2025-05-06 01:05:48 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:51.820700 | orchestrator | 2025-05-06 01:05:48 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:51.820820 | orchestrator | 2025-05-06 01:05:51 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:51.821802 | orchestrator | 2025-05-06 01:05:51 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:51.821831 | orchestrator | 2025-05-06 01:05:51 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:51.821846 | orchestrator | 2025-05-06 01:05:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:51.821866 | orchestrator | 2025-05-06 01:05:51 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:54.856926 | orchestrator | 2025-05-06 01:05:51 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:54.857133 | orchestrator | 2025-05-06 01:05:54 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state STARTED
2025-05-06 01:05:54.857413 | orchestrator | 2025-05-06 01:05:54 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:54.860537 | orchestrator | 2025-05-06 01:05:54 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:54.861939 | orchestrator | 2025-05-06 01:05:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:54.862407 | orchestrator | 2025-05-06 01:05:54 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:05:57.909999 | orchestrator | 2025-05-06 01:05:54 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:05:57.910187 | orchestrator |
2025-05-06 01:05:57.910208 | orchestrator |
2025-05-06 01:05:57.910223 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 01:05:57.910238 | orchestrator |
2025-05-06 01:05:57.910266 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 01:05:57.910281 | orchestrator | Tuesday 06 May 2025 01:04:48 +0000 (0:00:00.353) 0:00:00.353 ***********
2025-05-06 01:05:57.910295 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:05:57.910310 | orchestrator | ok: [testbed-node-1]
2025-05-06 01:05:57.910324 | orchestrator | ok: [testbed-node-2]
2025-05-06 01:05:57.910337 | orchestrator | ok: [testbed-manager]
2025-05-06 01:05:57.910351 | orchestrator | ok: [testbed-node-3]
2025-05-06 01:05:57.910364 | orchestrator | ok: [testbed-node-4]
2025-05-06 01:05:57.910378 | orchestrator | ok: [testbed-node-5]
2025-05-06 01:05:57.910391 | orchestrator |
2025-05-06 01:05:57.910405 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 01:05:57.910420 | orchestrator | Tuesday 06 May 2025 01:04:50 +0000 (0:00:01.185) 0:00:01.539 ***********
2025-05-06 01:05:57.910433 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-05-06 01:05:57.910448 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-05-06 01:05:57.910462 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-05-06 01:05:57.910477 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-05-06 01:05:57.910491 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-05-06 01:05:57.910509 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-05-06 01:05:57.910523 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-05-06 01:05:57.910537 | orchestrator |
2025-05-06 01:05:57.910551 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-06 01:05:57.910565 | orchestrator |
2025-05-06 01:05:57.910579 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-05-06 01:05:57.910618 | orchestrator | Tuesday 06 May 2025 01:04:50 +0000 (0:00:00.762) 0:00:02.301 ***********
2025-05-06 01:05:57.910757 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 01:05:57.910781 | orchestrator |
2025-05-06 01:05:57.910798 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-05-06 01:05:57.910815 | orchestrator | Tuesday 06 May 2025 01:04:51 +0000 (0:00:01.088) 0:00:03.389 ***********
2025-05-06 01:05:57.910832 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-05-06 01:05:57.910849 | orchestrator |
2025-05-06 01:05:57.910866 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-05-06 01:05:57.910883 | orchestrator | Tuesday 06 May 2025 01:04:56 +0000 (0:00:04.068) 0:00:07.457 ***********
2025-05-06 01:05:57.910901 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-05-06 01:05:57.910918 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-05-06 01:05:57.910934 | orchestrator |
2025-05-06 01:05:57.910952 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-05-06 01:05:57.910970 | orchestrator | Tuesday 06 May 2025 01:05:03 +0000 (0:00:06.999) 0:00:14.457 ***********
2025-05-06 01:05:57.910985 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-06 01:05:57.911000 | orchestrator |
2025-05-06 01:05:57.911021 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-05-06 01:05:57.911036 | orchestrator | Tuesday 06 May 2025 01:05:06 +0000 (0:00:03.428) 0:00:17.885 ***********
2025-05-06 01:05:57.911082 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-06 01:05:57.911097 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-05-06 01:05:57.911111 | orchestrator |
2025-05-06 01:05:57.911125 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-05-06 01:05:57.911139 | orchestrator | Tuesday 06 May 2025 01:05:10 +0000 (0:00:04.040) 0:00:21.926 ***********
2025-05-06 01:05:57.911153 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-06 01:05:57.911178 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-05-06 01:05:57.911192 | orchestrator |
2025-05-06 01:05:57.911206 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-05-06 01:05:57.911220 | orchestrator | Tuesday 06 May 2025 01:05:17 +0000 (0:00:06.546) 0:00:28.472 ***********
2025-05-06 01:05:57.911234 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-05-06 01:05:57.911247 | orchestrator |
2025-05-06 01:05:57.911261 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 01:05:57.911280 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:05:57.911295 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:05:57.911309 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:05:57.911323 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:05:57.911337 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:05:57.911371 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:05:57.911836 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:05:57.911878 | orchestrator |
2025-05-06 01:05:57.911894 | orchestrator |
2025-05-06 01:05:57.911908 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 01:05:57.911923 | orchestrator | Tuesday 06 May 2025 01:05:22 +0000 (0:00:05.190) 0:00:33.663 ***********
2025-05-06 01:05:57.911938 | orchestrator | ===============================================================================
2025-05-06 01:05:57.911952 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.00s
2025-05-06 01:05:57.911967 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.55s
2025-05-06 01:05:57.911982 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.19s
2025-05-06 01:05:57.911996 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.07s
2025-05-06 01:05:57.912010 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.04s
2025-05-06 01:05:57.912025 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.43s
2025-05-06 01:05:57.912040 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.19s
2025-05-06 01:05:57.912080 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.09s
2025-05-06 01:05:57.912094 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s
2025-05-06 01:05:57.912108 | orchestrator |
2025-05-06 01:05:57.912122 | orchestrator | 2025-05-06 01:05:57 | INFO  | Task deeb9f21-7aea-456e-8a44-cc3fb0c104b4 is in state SUCCESS
2025-05-06 01:05:57.912143 | orchestrator | 2025-05-06 01:05:57 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:05:57.912157 | orchestrator | 2025-05-06 01:05:57 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:05:57.912265 | orchestrator | 2025-05-06 01:05:57 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:05:57.915723 | orchestrator | 2025-05-06 01:05:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:05:57.915763 | orchestrator | 2025-05-06 01:05:57 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:00.941780 | orchestrator | 2025-05-06 01:05:57 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:00.941918 | orchestrator | 2025-05-06 01:06:00 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:00.942488 | orchestrator | 2025-05-06 01:06:00 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:00.942536 | orchestrator | 2025-05-06 01:06:00 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:00.942924 | orchestrator | 2025-05-06 01:06:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:00.943382 | orchestrator | 2025-05-06 01:06:00 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:00.943497 | orchestrator | 2025-05-06 01:06:00 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:03.968868 | orchestrator | 2025-05-06 01:06:03 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:03.969402 | orchestrator | 2025-05-06 01:06:03 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:03.969441 | orchestrator | 2025-05-06 01:06:03 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:03.970671 | orchestrator | 2025-05-06 01:06:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:03.971154 | orchestrator | 2025-05-06 01:06:03 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:06.999135 | orchestrator | 2025-05-06 01:06:03 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:06.999219 | orchestrator | 2025-05-06 01:06:06 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:06.999293 | orchestrator | 2025-05-06 01:06:07 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:06.999822 | orchestrator | 2025-05-06 01:06:07 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:07.000402 | orchestrator | 2025-05-06 01:06:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:07.000973 | orchestrator | 2025-05-06 01:06:07 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:07.003557 | orchestrator | 2025-05-06 01:06:07 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:10.044867 | orchestrator | 2025-05-06 01:06:10 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:10.045231 | orchestrator | 2025-05-06 01:06:10 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:10.046549 | orchestrator | 2025-05-06 01:06:10 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:10.048446 | orchestrator | 2025-05-06 01:06:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:13.085750 | orchestrator | 2025-05-06 01:06:10 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:13.085875 | orchestrator | 2025-05-06 01:06:10 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:13.085914 | orchestrator | 2025-05-06 01:06:13 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:13.086192 | orchestrator | 2025-05-06 01:06:13 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:13.086227 | orchestrator | 2025-05-06 01:06:13 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:13.086731 | orchestrator | 2025-05-06 01:06:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:13.087274 | orchestrator | 2025-05-06 01:06:13 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:16.108571 | orchestrator | 2025-05-06 01:06:13 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:16.108692 | orchestrator | 2025-05-06 01:06:16 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:16.109167 | orchestrator | 2025-05-06 01:06:16 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:16.109197 | orchestrator | 2025-05-06 01:06:16 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:16.109219 | orchestrator | 2025-05-06 01:06:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:16.109587 | orchestrator | 2025-05-06 01:06:16 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:19.139733 | orchestrator | 2025-05-06 01:06:16 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:19.139852 | orchestrator | 2025-05-06 01:06:19 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:19.140893 | orchestrator | 2025-05-06 01:06:19 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:19.141836 | orchestrator | 2025-05-06 01:06:19 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:19.142220 | orchestrator | 2025-05-06 01:06:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:19.142771 | orchestrator | 2025-05-06 01:06:19 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:22.167516 | orchestrator | 2025-05-06 01:06:19 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:22.167642 | orchestrator | 2025-05-06 01:06:22 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:22.167890 | orchestrator | 2025-05-06 01:06:22 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:22.167924 | orchestrator | 2025-05-06 01:06:22 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:22.168388 | orchestrator | 2025-05-06 01:06:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:22.168911 | orchestrator | 2025-05-06 01:06:22 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:25.197119 | orchestrator | 2025-05-06 01:06:22 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:25.197244 | orchestrator | 2025-05-06 01:06:25 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:25.199538 | orchestrator | 2025-05-06 01:06:25 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:28.226261 | orchestrator | 2025-05-06 01:06:25 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:28.226387 | orchestrator | 2025-05-06 01:06:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:28.226408 | orchestrator | 2025-05-06 01:06:25 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:28.226424 | orchestrator | 2025-05-06 01:06:25 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:28.226455 | orchestrator | 2025-05-06 01:06:28 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:28.226971 | orchestrator | 2025-05-06 01:06:28 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:28.227104 | orchestrator | 2025-05-06 01:06:28 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:28.227394 | orchestrator | 2025-05-06 01:06:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:28.227836 | orchestrator | 2025-05-06 01:06:28 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:31.272009 | orchestrator | 2025-05-06 01:06:28 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:31.272199 | orchestrator | 2025-05-06 01:06:31 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:31.274480 | orchestrator | 2025-05-06 01:06:31 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:31.275635 | orchestrator | 2025-05-06 01:06:31 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:31.277308 | orchestrator | 2025-05-06 01:06:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:31.279111 | orchestrator | 2025-05-06 01:06:31 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:34.319289 | orchestrator | 2025-05-06 01:06:31 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:34.319505 | orchestrator | 2025-05-06 01:06:34 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:34.320486 | orchestrator | 2025-05-06 01:06:34 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:34.320552 | orchestrator | 2025-05-06 01:06:34 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:34.320576 | orchestrator | 2025-05-06 01:06:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:34.321642 | orchestrator | 2025-05-06 01:06:34 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:37.356525 | orchestrator | 2025-05-06 01:06:34 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:37.356743 | orchestrator | 2025-05-06 01:06:37 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:37.359125 | orchestrator | 2025-05-06 01:06:37 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:37.359175 | orchestrator | 2025-05-06 01:06:37 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:37.359589 | orchestrator | 2025-05-06 01:06:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:37.360423 | orchestrator | 2025-05-06 01:06:37 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:37.360530 | orchestrator | 2025-05-06 01:06:37 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:40.387454 | orchestrator | 2025-05-06 01:06:40 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:40.388918 | orchestrator | 2025-05-06 01:06:40 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:40.389145 | orchestrator | 2025-05-06 01:06:40 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:40.390383 | orchestrator | 2025-05-06 01:06:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:43.427599 | orchestrator | 2025-05-06 01:06:40 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:43.427851 | orchestrator | 2025-05-06 01:06:40 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:43.427901 | orchestrator | 2025-05-06 01:06:43 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:43.428461 | orchestrator | 2025-05-06 01:06:43 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:43.428543 | orchestrator | 2025-05-06 01:06:43 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:43.428758 | orchestrator | 2025-05-06 01:06:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:43.429185 | orchestrator | 2025-05-06 01:06:43 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:46.457881 | orchestrator | 2025-05-06 01:06:43 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:46.458069 | orchestrator | 2025-05-06 01:06:46 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:46.458294 | orchestrator | 2025-05-06 01:06:46 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:46.458328 | orchestrator | 2025-05-06 01:06:46 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:46.458871 | orchestrator | 2025-05-06 01:06:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:46.459561 | orchestrator | 2025-05-06 01:06:46 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:46.461603 | orchestrator | 2025-05-06 01:06:46 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:49.493018 | orchestrator | 2025-05-06 01:06:49 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:49.494352 | orchestrator | 2025-05-06 01:06:49 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:49.494690 | orchestrator | 2025-05-06 01:06:49 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:49.498414 | orchestrator | 2025-05-06 01:06:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:52.533091 | orchestrator | 2025-05-06 01:06:49 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:52.533197 | orchestrator | 2025-05-06 01:06:49 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:52.533400 | orchestrator | 2025-05-06 01:06:52 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:52.534167 | orchestrator | 2025-05-06 01:06:52 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:52.534206 | orchestrator | 2025-05-06 01:06:52 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:52.534679 | orchestrator | 2025-05-06 01:06:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:06:52.535193 | orchestrator | 2025-05-06 01:06:52 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:06:55.561522 | orchestrator | 2025-05-06 01:06:52 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:06:55.561662 | orchestrator | 2025-05-06 01:06:55 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:06:55.563546 | orchestrator | 2025-05-06 01:06:55 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED
2025-05-06 01:06:55.565488 | orchestrator | 2025-05-06 01:06:55 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:06:55.567200 | orchestrator | 2025-05-06 01:06:55 | INFO  | Task
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:06:55.568837 | orchestrator | 2025-05-06 01:06:55 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED 2025-05-06 01:06:55.569153 | orchestrator | 2025-05-06 01:06:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:06:58.611484 | orchestrator | 2025-05-06 01:06:58 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:06:58.613023 | orchestrator | 2025-05-06 01:06:58 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:06:58.614354 | orchestrator | 2025-05-06 01:06:58 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED 2025-05-06 01:06:58.615512 | orchestrator | 2025-05-06 01:06:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:06:58.616795 | orchestrator | 2025-05-06 01:06:58 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED 2025-05-06 01:06:58.617093 | orchestrator | 2025-05-06 01:06:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:07:01.650414 | orchestrator | 2025-05-06 01:07:01 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:07:01.651774 | orchestrator | 2025-05-06 01:07:01 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:07:01.651812 | orchestrator | 2025-05-06 01:07:01 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED 2025-05-06 01:07:01.651838 | orchestrator | 2025-05-06 01:07:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:07:01.653772 | orchestrator | 2025-05-06 01:07:01 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED 2025-05-06 01:07:04.699912 | orchestrator | 2025-05-06 01:07:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:07:04.700065 | orchestrator | 2025-05-06 01:07:04 | INFO  | Task 
c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:07:04.700324 | orchestrator | 2025-05-06 01:07:04 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:07:04.701074 | orchestrator | 2025-05-06 01:07:04 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED 2025-05-06 01:07:04.702195 | orchestrator | 2025-05-06 01:07:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:07:04.702716 | orchestrator | 2025-05-06 01:07:04 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED 2025-05-06 01:07:04.702810 | orchestrator | 2025-05-06 01:07:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:07:07.743090 | orchestrator | 2025-05-06 01:07:07 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:07:07.744674 | orchestrator | 2025-05-06 01:07:07 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:07:07.746082 | orchestrator | 2025-05-06 01:07:07 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED 2025-05-06 01:07:07.747806 | orchestrator | 2025-05-06 01:07:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:07:07.750098 | orchestrator | 2025-05-06 01:07:07 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED 2025-05-06 01:07:10.790418 | orchestrator | 2025-05-06 01:07:07 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:07:10.790543 | orchestrator | 2025-05-06 01:07:10 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:07:10.795210 | orchestrator | 2025-05-06 01:07:10 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:07:10.797278 | orchestrator | 2025-05-06 01:07:10 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED 2025-05-06 01:07:10.797309 | orchestrator | 2025-05-06 01:07:10 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:07:10.797330 | orchestrator | 2025-05-06 01:07:10 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED 2025-05-06 01:07:13.839684 | orchestrator | 2025-05-06 01:07:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:07:13.839942 | orchestrator | 2025-05-06 01:07:13 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:07:16.899168 | orchestrator | 2025-05-06 01:07:13 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:07:16.899273 | orchestrator | 2025-05-06 01:07:13 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED 2025-05-06 01:07:16.899284 | orchestrator | 2025-05-06 01:07:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:07:16.899292 | orchestrator | 2025-05-06 01:07:13 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED 2025-05-06 01:07:16.899300 | orchestrator | 2025-05-06 01:07:13 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:07:16.899322 | orchestrator | 2025-05-06 01:07:16 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:07:16.900931 | orchestrator | 2025-05-06 01:07:16 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:07:16.902648 | orchestrator | 2025-05-06 01:07:16 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED 2025-05-06 01:07:16.904904 | orchestrator | 2025-05-06 01:07:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:07:16.907124 | orchestrator | 2025-05-06 01:07:16 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED 2025-05-06 01:07:16.907603 | orchestrator | 2025-05-06 01:07:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:07:19.943421 | orchestrator | 2025-05-06 01:07:19 | INFO  | Task 
c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:07:19.946909 | orchestrator | 2025-05-06 01:07:19 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state STARTED 2025-05-06 01:07:19.949198 | orchestrator | 2025-05-06 01:07:19 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED 2025-05-06 01:07:19.951463 | orchestrator | 2025-05-06 01:07:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:07:19.954250 | orchestrator | 2025-05-06 01:07:19 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED 2025-05-06 01:07:19.955025 | orchestrator | 2025-05-06 01:07:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:07:22.989947 | orchestrator | 2025-05-06 01:07:22 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:07:23.001716 | orchestrator | 2025-05-06 01:07:23.001792 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-05-06 01:07:23.001819 | orchestrator | 2025-05-06 01:07:23.001844 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-05-06 01:07:23.001869 | orchestrator | Tuesday 06 May 2025 00:59:57 +0000 (0:00:00.174) 0:00:00.174 *********** 2025-05-06 01:07:23.001894 | orchestrator | changed: [localhost] 2025-05-06 01:07:23.001919 | orchestrator | 2025-05-06 01:07:23.001944 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-05-06 01:07:23.001997 | orchestrator | Tuesday 06 May 2025 00:59:58 +0000 (0:00:00.541) 0:00:00.715 *********** 2025-05-06 01:07:23.002543 | orchestrator | 2025-05-06 01:07:23.002568 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-05-06 01:07:23.002585 | orchestrator | 2025-05-06 01:07:23.002601 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 
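The wait loop summarized above follows a simple poll-and-sleep pattern: check every pending task, drop the ones that have finished, and sleep before the next round. A minimal sketch of that pattern, assuming a hypothetical `get_task_state` lookup (this is not the actual OSISM task client):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until none is still in state STARTED.

    get_task_state is a caller-supplied lookup (hypothetical here);
    it returns a state string such as "STARTED" or "SUCCESS".
    """
    pending = set(task_ids)
    while pending:
        # Check every pending task once per round, logging its state.
        states = {t: get_task_state(t) for t in sorted(pending)}
        for task_id, state in states.items():
            print(f"Task {task_id} is in state {state}")
        # Keep only the tasks that have not finished yet.
        pending = {t for t, s in states.items() if s == "STARTED"}
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Note that the wall-clock gap between rounds in the log (about three seconds) is larger than the configured one-second wait, since each status lookup itself takes time.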
[The STILL ALIVE heartbeat for task 'Download ironic-agent initramfs' repeated several more times while the download ran; the duplicate heartbeat lines are omitted here.]
2025-05-06 01:07:23.002821 | orchestrator | changed: [localhost]
2025-05-06 01:07:23.002889 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-06 01:07:23.002917 | orchestrator | Tuesday 06 May 2025 01:05:52 +0000 (0:05:53.900) 0:05:54.616 ***********
2025-05-06 01:07:23.002933 | orchestrator | changed: [localhost]
2025-05-06 01:07:23.002985 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 01:07:23.003014 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 01:07:23.003028 | orchestrator | Tuesday 06 May 2025 01:05:55 +0000 (0:00:03.770) 0:05:58.386 ***********
2025-05-06 01:07:23.003042 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:07:23.003056 | orchestrator | ok: [testbed-node-1]
2025-05-06 01:07:23.003090 | orchestrator | ok: [testbed-node-2]
2025-05-06 01:07:23.003119 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 01:07:23.003133 | orchestrator | Tuesday 06 May 2025 01:05:56 +0000 (0:00:00.281) 0:05:58.668 ***********
2025-05-06 01:07:23.003147 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-06 01:07:23.003161 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-06 01:07:23.003175 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-06 01:07:23.003189 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-06 01:07:23.003217 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-06 01:07:23.003231 | orchestrator | skipping: no hosts matched
2025-05-06 01:07:23.003259 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 01:07:23.003273 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:07:23.003289 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:07:23.003304 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:07:23.003318 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:07:23.003360 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 01:07:23.003373 | orchestrator | Tuesday 06 May 2025 01:05:56 +0000 (0:00:00.347) 0:05:59.016 ***********
2025-05-06 01:07:23.003387 | orchestrator | ===============================================================================
2025-05-06 01:07:23.003401 | orchestrator | Download ironic-agent initramfs --------------------------------------- 353.90s
2025-05-06 01:07:23.003415 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.77s
2025-05-06 01:07:23.003429 | orchestrator | Ensure the destination directory exists --------------------------------- 0.54s
2025-05-06 01:07:23.003443 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s
2025-05-06 01:07:23.003456 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2025-05-06 01:07:23.003497 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 01:07:23.003525 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 01:07:23.003538 | orchestrator | Tuesday 06 May 2025 01:03:22 +0000 (0:00:00.323) 0:00:00.323 ***********
2025-05-06 01:07:23.003552 | orchestrator | ok: [testbed-manager]
2025-05-06 01:07:23.003566 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:07:23.003580 | orchestrator | ok: [testbed-node-1]
2025-05-06 01:07:23.003594 | orchestrator | ok: [testbed-node-2]
2025-05-06 01:07:23.003607 | orchestrator | ok: [testbed-node-3]
2025-05-06 01:07:23.003621 | orchestrator | ok: [testbed-node-4]
2025-05-06 01:07:23.003635 | orchestrator | ok: [testbed-node-5]
2025-05-06 01:07:23.003662 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 01:07:23.003676 | orchestrator | Tuesday 06 May 2025 01:03:24 +0000 (0:00:01.165) 0:00:01.489 ***********
2025-05-06 01:07:23.003702 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-05-06 01:07:23.003717 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-05-06 01:07:23.003730 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-05-06 01:07:23.003744 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-05-06 01:07:23.003765 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-05-06 01:07:23.003779 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-05-06 01:07:23.003793 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-05-06 01:07:23.003821 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-05-06 01:07:23.003848 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-05-06 01:07:23.003862 | orchestrator | Tuesday 06 May 2025 01:03:25 +0000 (0:00:01.246) 0:00:02.736 ***********
2025-05-06 01:07:23.003876 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 01:07:23.003905 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-05-06 01:07:23.003919 | orchestrator | Tuesday 06 May 2025 01:03:26 +0000 (0:00:01.077) 0:00:03.813 ***********
2025-05-06 01:07:23.003934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
[The same 'prometheus-server' item was reported with an identical dict as skipping on testbed-node-5, testbed-node-0, testbed-node-2, testbed-node-4 and testbed-node-3, and as changed on testbed-manager; the duplicate item dicts are omitted here.]
2025-05-06 01:07:23.004116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
[The same 'prometheus-node-exporter' item was reported with an identical dict as changed on testbed-node-2, testbed-node-0, testbed-node-5, testbed-node-4, testbed-node-3 and testbed-manager; the duplicate item dicts are omitted here.]
2025-05-06 01:07:23.004216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
[The same 'prometheus-mysqld-exporter' item was skipped with an identical dict on testbed-node-4, testbed-node-3 and testbed-manager, and reported as changed on testbed-node-1, testbed-node-0 and testbed-node-2; the duplicate item dicts are omitted here.]
2025-05-06 01:07:23.004245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
[The same 'prometheus-memcached-exporter' item was skipped with an identical dict on testbed-node-5, testbed-node-3 and testbed-manager; the duplicate item dicts are omitted here.]
2025-05-06 01:07:23.004451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.004467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
[The same 'prometheus-cadvisor' item was reported with an identical dict as changed on testbed-manager; the duplicate item dict is omitted here.]
2025-05-06 01:07:23.004497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.004517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.004543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.004564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.004579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.004603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.004618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.004633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.004667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.004682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.004706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.004722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.004736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.004751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.004771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-06 01:07:22 | INFO  | Task ba91024c-2231-4869-b653-226ca2ba1790 is in state SUCCESS 2025-05-06 01:07:23.004803 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.004835 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.004849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.004864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.004878 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.004899 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.004914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.004935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.004950 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.005042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.005061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.005075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.005098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 
'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.005113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.005136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.005164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.005179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.005193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.005215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.005246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.005262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.005277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.005298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.005313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.005327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.005348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 
'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.005363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.005388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.005404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.005419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.005439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.005454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.005469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 
'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.005498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.005512 | orchestrator | 2025-05-06 01:07:23.005524 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-06 01:07:23.005537 | orchestrator | Tuesday 06 May 2025 01:03:30 +0000 (0:00:03.704) 0:00:07.517 *********** 2025-05-06 01:07:23.005550 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 01:07:23.005562 | orchestrator | 2025-05-06 01:07:23.005575 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-06 01:07:23.005587 | orchestrator | Tuesday 06 May 2025 01:03:31 +0000 (0:00:01.136) 0:00:08.654 *********** 2025-05-06 01:07:23.005600 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-06 01:07:23.005618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.005631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.005644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.005657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.005682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.005696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-05-06 01:07:23.005718 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.005732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.005750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.005763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.005776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.005789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.005817 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.005831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.005844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.005862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.005875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.005888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.005909 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-06 01:07:23.006691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.006794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.006837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.006854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.006869 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.006883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.006898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.006946 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.007018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.007037 | orchestrator | 2025-05-06 01:07:23.007062 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-06 01:07:23.007077 | orchestrator | Tuesday 06 May 2025 01:03:36 +0000 (0:00:05.445) 0:00:14.099 *********** 2025-05-06 01:07:23.007092 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.007108 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.007123 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.007169 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.007216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007231 | orchestrator | skipping: [testbed-manager] 2025-05-06 01:07:23.007250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.007318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007411 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:07:23.007428 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:07:23.007445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.007462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007546 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:07:23.007570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.007587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007616 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:07:23.007631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.007645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007684 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:07:23.007699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.007725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007755 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:07:23.007769 | orchestrator | 2025-05-06 01:07:23.007784 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-06 01:07:23.007798 | orchestrator | Tuesday 06 May 2025 01:03:38 +0000 (0:00:01.969) 0:00:16.068 *********** 2025-05-06 01:07:23.007813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.007827 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.007915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.007943 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.007979 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.007996 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.008011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.008038 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.008081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.008132 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:07:23.008147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008166 | orchestrator | skipping: [testbed-manager] 2025-05-06 01:07:23.008181 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:07:23.008200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.008216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008240 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.008282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008297 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:07:23.008311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.008325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.008340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.008354 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:07:23.008368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.008392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.008414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.008429 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:07:23.008443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-06 01:07:23.008463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.008479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.008493 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:07:23.008507 | orchestrator | 2025-05-06 01:07:23.008521 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-06 01:07:23.008535 | orchestrator | Tuesday 06 May 2025 01:03:41 +0000 (0:00:02.670) 0:00:18.739 *********** 2025-05-06 01:07:23.008550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.008574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.008596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.008616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.008632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.008656 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-06 01:07:23.008672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.008698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.008714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.008733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.008748 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.008762 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008789 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.008825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.008873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008888 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.008903 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.008988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.009005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.009020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.009116 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.009136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.009151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.009174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.009190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.009207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.009268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.009286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.009302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.009325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.009353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.009398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.009415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.009430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.009452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.009467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.009481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.009508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.009555 
| orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-06 01:07:23.009572 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.009596 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.009611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.009626 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.009652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.009668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.009712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.009729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.009752 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.009767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.009792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.009808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.009853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.009877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.009901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.009917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.009932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.010073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.010112 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.010142 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.010158 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.010173 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.010187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.010202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.010250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.010267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.010293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.010318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.010334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 
'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.010349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.010364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.010379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-06 01:07:23.010423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.010447 | orchestrator | 2025-05-06 01:07:23.010462 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-06 01:07:23.010477 | orchestrator | Tuesday 06 May 2025 01:03:47 +0000 (0:00:06.367) 0:00:25.106 *********** 2025-05-06 01:07:23.010491 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-06 01:07:23.010505 | orchestrator | 2025-05-06 01:07:23.010519 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-06 01:07:23.010533 | orchestrator | Tuesday 06 May 2025 01:03:48 +0000 (0:00:00.582) 0:00:25.688 *********** 2025-05-06 01:07:23.010547 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1337099, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010573 | orchestrator | 
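The loop output above iterates over kolla-ansible's map of prometheus service definitions; an item is reported as `skipping` on a host when that host is not in the service's inventory group or the service is disabled, and `changed` when its config is (re)written. A minimal sketch of that per-host selection logic (the `services` and `host_groups` names are illustrative, not taken from the kolla-ansible source):

```python
# Sketch of the per-host service selection visible in the loop output.
# The dicts mirror the {'key': ..., 'value': {...}} loop items; names
# are hypothetical, not the actual kolla-ansible variables.
services = {
    "prometheus-blackbox-exporter": {"enabled": True, "group": "prometheus-blackbox-exporter"},
    "prometheus-libvirt-exporter": {"enabled": True, "group": "prometheus-libvirt-exporter"},
    "prometheus-msteams": {"enabled": False, "group": "prometheus-msteams"},
}

# Inventory groups this particular host belongs to.
host_groups = {"prometheus-blackbox-exporter"}

def should_configure(svc: dict) -> bool:
    """A service's config is deployed on a host only if the service is
    enabled AND the host is in the service's group; otherwise the loop
    item shows up as 'skipping' for that host."""
    return svc["enabled"] and svc["group"] in host_groups

selected = [name for name, svc in services.items() if should_configure(svc)]
```

On a manager node with more groups, more items would be `changed` instead of `skipping`, which matches the per-node differences in the log.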
skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1337099, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010588 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1337099, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010603 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1337099, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010618 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1337099, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010663 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1337099, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010686 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1337115, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010712 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1337115, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 
'mtime': 1737057119.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010728 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1337115, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010742 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1337115, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010757 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1337115, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010772 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1337099, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-06 01:07:23.010786 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1337115, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010848 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1337102, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010867 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1337102, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010882 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1337102, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010896 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1337102, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010910 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1337102, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010925 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1337102, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.010946 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1337110, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011110 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1337110, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011139 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1337110, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011155 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1337110, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011169 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1337110, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011184 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1337146, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9959166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011198 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1337146, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9959166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011221 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1337110, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011278 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1337146, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9959166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011297 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1337146, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9959166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011312 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1337146, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9959166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011326 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1337115, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-06 01:07:23.011341 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1337122, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9909165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011355 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1337122, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9909165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011377 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1337122, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1746490537.9909165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011432 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1337122, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9909165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011451 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1337146, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9959166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-06 01:07:23.011466 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1337122, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9909165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011481 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1337107, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011495 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1337107, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011517 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1337107, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011543 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1337122, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9909165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011583 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1337107, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011599 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1337107, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011612 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1337119, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011624 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1337107, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011637 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1337119, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011656 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1337119, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011680 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1337119, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011720 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1337102, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9879165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011735 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1337144, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9949167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011748 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1337119, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011760 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1337119, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011773 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1337144, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9949167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011804 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1337144, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9949167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011818 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1337144, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9949167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011857 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1337104, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011872 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1337144, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9949167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011885 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1337144, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9949167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011898 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1337104, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011912 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1337104, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011942 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1337104, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.011956 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1337104, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012019 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1337131, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9919167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012035 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:07:23.012048 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1337131, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9919167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012060 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:07:23.012073 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1337104, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012086 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1337131, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9919167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012106 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:07:23.012129 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1337131, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9919167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012143 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:07:23.012155 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1337110, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012168 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1337131, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9919167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012181 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:07:23.012220 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1337131, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9919167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012235 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:07:23.012248 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1337146, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9959166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012261 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1337122, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9909165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012280 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1337107, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012302 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1337119, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9899166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012316 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1337144, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9949167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012329 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1337104, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9889166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012368 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1337131, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9919167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-06 01:07:23.012383 | orchestrator |
2025-05-06 01:07:23.012396 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-06 01:07:23.012409 | orchestrator | Tuesday 06 May 2025 01:04:28 +0000 (0:00:39.988) 0:01:05.677 ***********
2025-05-06 01:07:23.012421 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-06 01:07:23.012434 | orchestrator |
2025-05-06 01:07:23.012446 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-06 01:07:23.012459 | orchestrator | Tuesday 06 May 2025 01:04:28 +0000 (0:00:00.399) 0:01:06.077 ***********
2025-05-06 01:07:23.012471 | orchestrator | [WARNING]: Skipped
2025-05-06 01:07:23.012483 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012496 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-05-06 01:07:23.012509 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012529 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-05-06 01:07:23.012541 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-06 01:07:23.012554 | orchestrator | [WARNING]: Skipped
2025-05-06 01:07:23.012567 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012579 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-05-06 01:07:23.012591 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012604 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-05-06 01:07:23.012616 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-06 01:07:23.012629 | orchestrator | [WARNING]: Skipped
2025-05-06 01:07:23.012642 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012654 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-05-06 01:07:23.012667 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012679 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-05-06 01:07:23.012692 | orchestrator | [WARNING]: Skipped
2025-05-06 01:07:23.012704 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012717 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-05-06 01:07:23.012729 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012741 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-05-06 01:07:23.012754 | orchestrator | [WARNING]: Skipped
2025-05-06 01:07:23.012766 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012779 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-05-06 01:07:23.012791 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012804 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-05-06 01:07:23.012816 | orchestrator | [WARNING]: Skipped
2025-05-06 01:07:23.012828 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012841 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-05-06 01:07:23.012854 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012866 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-05-06 01:07:23.012879 | orchestrator | [WARNING]: Skipped
2025-05-06 01:07:23.012891 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012903 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-05-06 01:07:23.012916 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-06 01:07:23.012928 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-05-06 01:07:23.012941 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-06 01:07:23.012953 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-06 01:07:23.012985 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-06 01:07:23.012998 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-06 01:07:23.013011 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-06 01:07:23.013023 | orchestrator |
2025-05-06 01:07:23.013036 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-06 01:07:23.013049 | orchestrator | Tuesday 06 May 2025 01:04:29 +0000 (0:00:01.271) 0:01:07.348 ***********
2025-05-06 01:07:23.013062 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-06 01:07:23.013074 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:07:23.013087 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-06 01:07:23.013099 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:07:23.013112 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-06 01:07:23.013131 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:07:23.013174 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-06 01:07:23.013189 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:07:23.013202 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-06 01:07:23.013215 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:07:23.013232 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-06 01:07:23.013246 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:07:23.013259 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-06 01:07:23.013271 | orchestrator |
2025-05-06 01:07:23.013284 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-06 01:07:23.013297 | orchestrator | Tuesday 06 May 2025 01:04:47 +0000 (0:00:18.045) 0:01:25.394 ***********
2025-05-06 01:07:23.013310 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-06 01:07:23.013323 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:07:23.013336 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-06 01:07:23.013348 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:07:23.013361 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-06 01:07:23.013374 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:07:23.013386 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-06 01:07:23.013399 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:07:23.013411 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-06 01:07:23.013424 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:07:23.013436 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-06 01:07:23.013449 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:07:23.013461 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-06 01:07:23.013474 | orchestrator |
2025-05-06 01:07:23.013491 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-06 01:07:23.013505 | orchestrator | Tuesday 06 May 2025 01:04:52 +0000 (0:00:04.786) 0:01:30.181 ***********
2025-05-06 01:07:23.013517 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-06 01:07:23.013530 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:07:23.013543 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-06 01:07:23.013555 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:07:23.013568 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-06 01:07:23.013581 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:07:23.013593 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-06 01:07:23.013606 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:07:23.013619 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-06 01:07:23.013631 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:07:23.013644 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-06 01:07:23.013666 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:07:23.013679 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-06 01:07:23.013691 | orchestrator |
2025-05-06 01:07:23.013704 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-06 01:07:23.013716 | orchestrator | Tuesday 06 May 2025 01:04:56 +0000 (0:00:03.462) 0:01:33.643 ***********
2025-05-06 01:07:23.013729 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-06 01:07:23.013741 | orchestrator |
2025-05-06 01:07:23.013754 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-06 01:07:23.013767 | orchestrator | Tuesday 06 May 2025 01:04:56 +0000 (0:00:00.445) 0:01:34.088 ***********
2025-05-06 01:07:23.013779 | orchestrator | skipping: [testbed-manager]
2025-05-06 01:07:23.013792 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:07:23.013804 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:07:23.013817 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:07:23.013829 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:07:23.013842 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:07:23.013854 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:07:23.013866 | orchestrator |
2025-05-06 01:07:23.013879 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-05-06 01:07:23.013891 | orchestrator | Tuesday 06 May 2025 01:04:57 +0000 (0:00:00.750) 0:01:34.838 ***********
2025-05-06 01:07:23.013904 | orchestrator | skipping: [testbed-manager]
2025-05-06 01:07:23.013916 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:07:23.013929 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:07:23.013941 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:07:23.013953 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:07:23.013991 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:07:23.014014 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:07:23.014063 | orchestrator |
2025-05-06 01:07:23.014085 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-05-06 01:07:23.014098 | orchestrator | Tuesday 06 May 2025 01:05:00 +0000 (0:00:03.452) 0:01:38.291 ***********
2025-05-06 01:07:23.014111 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-06 01:07:23.014124 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:07:23.014137 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-06 01:07:23.014151 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:07:23.014164 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-06 01:07:23.014176 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:07:23.014188 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-06 01:07:23.014201 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:07:23.014213 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-06 01:07:23.014225 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:07:23.014237 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-06 01:07:23.014250 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:07:23.014262 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-06 01:07:23.014274 | orchestrator | skipping: [testbed-manager]
2025-05-06 01:07:23.014287 | orchestrator |
2025-05-06 01:07:23.014299 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-05-06 01:07:23.014312 | orchestrator | Tuesday 06 May 2025 01:05:03 +0000 (0:00:02.342) 0:01:40.634 ***********
2025-05-06 01:07:23.014324 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-06 01:07:23.014336 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:07:23.014356 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-06 01:07:23.014368 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:07:23.014381 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-06 01:07:23.014393 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:07:23.014406 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-06 01:07:23.014418 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:07:23.014431 | orchestrator | skipping: [testbed-node-3] =>
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-06 01:07:23.014443 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:07:23.014455 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-06 01:07:23.014468 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:07:23.014481 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-06 01:07:23.014493 | orchestrator | 2025-05-06 01:07:23.014506 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-06 01:07:23.014518 | orchestrator | Tuesday 06 May 2025 01:05:06 +0000 (0:00:03.366) 0:01:44.000 *********** 2025-05-06 01:07:23.014530 | orchestrator | [WARNING]: Skipped 2025-05-06 01:07:23.014543 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-06 01:07:23.014555 | orchestrator | due to this access issue: 2025-05-06 01:07:23.014568 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-06 01:07:23.014580 | orchestrator | not a directory 2025-05-06 01:07:23.014599 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-06 01:07:23.014611 | orchestrator | 2025-05-06 01:07:23.014624 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-06 01:07:23.014636 | orchestrator | Tuesday 06 May 2025 01:05:08 +0000 (0:00:01.478) 0:01:45.479 *********** 2025-05-06 01:07:23.014648 | orchestrator | skipping: [testbed-manager] 2025-05-06 01:07:23.014660 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:07:23.014673 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:07:23.014685 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:07:23.014698 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:07:23.014710 
| orchestrator | skipping: [testbed-node-4] 2025-05-06 01:07:23.014722 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:07:23.014734 | orchestrator | 2025-05-06 01:07:23.014747 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-06 01:07:23.014759 | orchestrator | Tuesday 06 May 2025 01:05:08 +0000 (0:00:00.891) 0:01:46.371 *********** 2025-05-06 01:07:23.014772 | orchestrator | skipping: [testbed-manager] 2025-05-06 01:07:23.014784 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:07:23.014796 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:07:23.014809 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:07:23.014821 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:07:23.014833 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:07:23.014846 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:07:23.014858 | orchestrator | 2025-05-06 01:07:23.014871 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-06 01:07:23.014883 | orchestrator | Tuesday 06 May 2025 01:05:09 +0000 (0:00:00.893) 0:01:47.264 *********** 2025-05-06 01:07:23.014896 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-06 01:07:23.014908 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:07:23.014927 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-06 01:07:23.014940 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:07:23.014952 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-06 01:07:23.015020 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:07:23.015034 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-06 
01:07:23.015047 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:07:23.015059 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-06 01:07:23.015072 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:07:23.015084 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-06 01:07:23.015096 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:07:23.015109 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-06 01:07:23.015121 | orchestrator | skipping: [testbed-manager] 2025-05-06 01:07:23.015134 | orchestrator | 2025-05-06 01:07:23.015146 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-06 01:07:23.015158 | orchestrator | Tuesday 06 May 2025 01:05:12 +0000 (0:00:03.128) 0:01:50.392 *********** 2025-05-06 01:07:23.015171 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-06 01:07:23.015183 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-06 01:07:23.015196 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:07:23.015208 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:07:23.015220 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-06 01:07:23.015233 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:07:23.015245 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-06 01:07:23.015257 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:07:23.015270 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-06 
01:07:23.015282 | orchestrator | skipping: [testbed-manager] 2025-05-06 01:07:23.015294 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-06 01:07:23.015307 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:07:23.015324 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-06 01:07:23.015337 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:07:23.015349 | orchestrator | 2025-05-06 01:07:23.015361 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-06 01:07:23.015374 | orchestrator | Tuesday 06 May 2025 01:05:15 +0000 (0:00:02.214) 0:01:52.607 *********** 2025-05-06 01:07:23.015387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.015415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.015442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.015457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.015470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.015483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-06 01:07:23.015504 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-06 01:07:23.015528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.015540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.015551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.015561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.015572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.015582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.015593 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.015615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.015631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.015642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.015653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.015664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.015674 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-06 01:07:23.015685 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.015702 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.015718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.015733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 
01:07:23.015744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-06 01:07:23.015754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.015765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.015784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-06 01:07:23.015800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.015811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-06 01:07:23.015826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-06 01:07:23.015837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-06 01:07:23.015859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-06 01:07:23.015875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.015885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.015896 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.015911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.015922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-06 01:07:23.015941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-06 01:07:23.015971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.015984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.015994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016040 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-06 01:07:23.016052 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-06 01:07:23.016068 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.016094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.016125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.016151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.016172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-06 01:07:23.016196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-06 01:07:23.016208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.016219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-06 01:07:23.016234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-06 01:07:23.016254 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016269 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.016281 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.016303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-06 01:07:23.016326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-06 01:07:23.016337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.016373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.016428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06 01:07:23.016466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-06 01:07:23.016477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-06
01:07:23.016494 | orchestrator |
2025-05-06 01:07:23.016505 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-05-06 01:07:23.016515 | orchestrator | Tuesday 06 May 2025 01:05:19 +0000 (0:00:04.520) 0:01:57.127 ***********
2025-05-06 01:07:23.016525 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-06 01:07:23.016536 | orchestrator |
2025-05-06 01:07:23.016546 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-06 01:07:23.016557 | orchestrator | Tuesday 06 May 2025 01:05:22 +0000 (0:00:02.436) 0:01:59.563 ***********
2025-05-06 01:07:23.016567 | orchestrator |
2025-05-06 01:07:23.016577 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-06 01:07:23.016587 | orchestrator | Tuesday 06 May 2025 01:05:22 +0000 (0:00:00.056) 0:01:59.619 ***********
2025-05-06 01:07:23.016597 | orchestrator |
2025-05-06 01:07:23.016611 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-06 01:07:23.016621 | orchestrator | Tuesday 06 May 2025 01:05:22 +0000 (0:00:00.167) 0:01:59.787 ***********
2025-05-06 01:07:23.016631 | orchestrator |
2025-05-06 01:07:23.016641 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-06 01:07:23.016652 | orchestrator | Tuesday 06 May 2025 01:05:22 +0000 (0:00:00.052) 0:01:59.839 ***********
2025-05-06 01:07:23.016662 | orchestrator |
2025-05-06 01:07:23.016672 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-06 01:07:23.016682 | orchestrator | Tuesday 06 May 2025 01:05:22 +0000 (0:00:00.048) 0:01:59.888 ***********
2025-05-06 01:07:23.016692 | orchestrator |
2025-05-06 01:07:23.016702 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-06 01:07:23.016712 | orchestrator | Tuesday 06 May 2025 01:05:22 +0000 (0:00:00.047) 0:01:59.935 ***********
2025-05-06 01:07:23.016722 | orchestrator |
2025-05-06 01:07:23.016732 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-06 01:07:23.016743 | orchestrator | Tuesday 06 May 2025 01:05:22 +0000 (0:00:00.170) 0:02:00.106 ***********
2025-05-06 01:07:23.016756 | orchestrator |
2025-05-06 01:07:23.016766 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-05-06 01:07:23.016776 | orchestrator | Tuesday 06 May 2025 01:05:22 +0000 (0:00:00.067) 0:02:00.173 ***********
2025-05-06 01:07:23.016786 | orchestrator | changed: [testbed-manager]
2025-05-06 01:07:23.016796 | orchestrator |
2025-05-06 01:07:23.016806 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-05-06 01:07:23.016816 | orchestrator | Tuesday 06 May 2025 01:05:40 +0000 (0:00:17.281) 0:02:17.455 ***********
2025-05-06 01:07:23.016827 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:07:23.016837 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:07:23.016847 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:07:23.016857 | orchestrator | changed: [testbed-manager]
2025-05-06 01:07:23.016867 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:07:23.016877 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:07:23.016890 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:07:23.016900 | orchestrator |
2025-05-06 01:07:23.016910 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-05-06 01:07:23.016921 | orchestrator | Tuesday 06 May 2025 01:05:58 +0000 (0:00:17.994) 0:02:35.449 ***********
2025-05-06 01:07:23.016931 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:07:23.016941 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:07:23.016951 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:07:23.016977 | orchestrator |
2025-05-06 01:07:23.016989 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-05-06 01:07:23.016999 | orchestrator | Tuesday 06 May 2025 01:06:12 +0000 (0:00:14.393) 0:02:49.843 ***********
2025-05-06 01:07:23.017014 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:07:23.017024 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:07:23.017034 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:07:23.017044 | orchestrator |
2025-05-06 01:07:23.017054 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-05-06 01:07:23.017064 | orchestrator | Tuesday 06 May 2025 01:06:19 +0000 (0:00:07.510) 0:02:57.354 ***********
2025-05-06 01:07:23.017074 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:07:23.017089 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:07:23.017099 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:07:23.017110 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:07:23.017119 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:07:23.017129 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:07:23.017139 | orchestrator | changed: [testbed-manager]
2025-05-06 01:07:23.017149 | orchestrator |
2025-05-06 01:07:23.017160 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-05-06 01:07:23.017170 | orchestrator | Tuesday 06 May 2025 01:06:38 +0000 (0:00:19.004) 0:03:16.359 ***********
2025-05-06 01:07:23.017180 | orchestrator | changed: [testbed-manager]
2025-05-06 01:07:23.017190 | orchestrator |
2025-05-06 01:07:23.017200 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-05-06 01:07:23.017210 | orchestrator | Tuesday 06 May 2025 01:06:50 +0000 (0:00:11.380) 0:03:27.740 ***********
2025-05-06 01:07:23.017220 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:07:23.017230 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:07:23.017240 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:07:23.017250 | orchestrator |
2025-05-06 01:07:23.017260 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-05-06 01:07:23.017271 | orchestrator | Tuesday 06 May 2025 01:07:03 +0000 (0:00:13.009) 0:03:40.749 ***********
2025-05-06 01:07:23.017281 | orchestrator | changed: [testbed-manager]
2025-05-06 01:07:23.017291 | orchestrator |
2025-05-06 01:07:23.017301 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-05-06 01:07:23.017311 | orchestrator | Tuesday 06 May 2025 01:07:10 +0000 (0:00:07.380) 0:03:48.130 ***********
2025-05-06 01:07:23.017321 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:07:23.017331 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:07:23.017341 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:07:23.017351 | orchestrator |
2025-05-06 01:07:23.017361 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 01:07:23.017371 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-06 01:07:23.017381 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-06 01:07:23.017392 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-06 01:07:23.017402 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-06 01:07:23.017412 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-06 01:07:23.017423 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-06 01:07:23.017433 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-06 01:07:23.017443 | orchestrator |
2025-05-06 01:07:23.017453 | orchestrator |
2025-05-06 01:07:23.017463 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 01:07:23.017477 | orchestrator | Tuesday 06 May 2025 01:07:22 +0000 (0:00:11.720) 0:03:59.850 ***********
2025-05-06 01:07:23.017491 | orchestrator | ===============================================================================
2025-05-06 01:07:23.017501 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 39.99s
2025-05-06 01:07:23.017511 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 19.00s
2025-05-06 01:07:23.017522 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.05s
2025-05-06 01:07:23.017531 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 17.99s
2025-05-06 01:07:23.017541 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.28s
2025-05-06 01:07:23.017552 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 14.39s
2025-05-06 01:07:23.017561 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.01s
2025-05-06 01:07:23.017572 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.72s
2025-05-06 01:07:23.017581 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.38s
2025-05-06 01:07:23.017592 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 7.51s
2025-05-06 01:07:23.017602 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 7.38s
2025-05-06 01:07:23.017612 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.37s
2025-05-06 01:07:23.017622 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.45s
2025-05-06 01:07:23.017632 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.79s
2025-05-06 01:07:23.017642 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.52s
2025-05-06 01:07:23.017652 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.70s
2025-05-06 01:07:23.017662 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.46s
2025-05-06 01:07:23.017672 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.45s
2025-05-06 01:07:23.017686 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.37s
2025-05-06 01:07:26.053145 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 3.13s
2025-05-06 01:07:26.053277 | orchestrator | 2025-05-06 01:07:22 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:07:26.053297 | orchestrator | 2025-05-06 01:07:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:07:26.053312 | orchestrator | 2025-05-06 01:07:23 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:07:26.053326 | orchestrator | 2025-05-06 01:07:23 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:07:26.053359 | orchestrator | 2025-05-06 01:07:26 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:07:26.054329 | orchestrator | 2025-05-06 01:07:26 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:07:26.055879 | orchestrator | 2025-05-06 01:07:26 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED
2025-05-06
2025-05-06 01:07:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:07:26 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state STARTED
2025-05-06 01:07:26 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles from 01:07:29 through 01:08:11 omitted; tasks c305dfe9, 82ca487e, 6cbb1036, 6bf1245d and 2f18d01f remained in state STARTED ...]
2025-05-06 01:08:14 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED
2025-05-06 01:08:14 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:08:14 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state STARTED
2025-05-06 01:08:14 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED
2025-05-06 01:08:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:08:14 | INFO  | Task 2f18d01f-b00e-44b6-b705-53f59fe574cf is in state SUCCESS


PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Tuesday 06 May 2025 01:05:05 +0000 (0:00:00.200) 0:00:00.200 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Tuesday 06 May 2025 01:05:06 +0000 (0:00:00.258) 0:00:00.459 ***********
ok: [testbed-node-0] => (item=enable_glance_True)
ok: [testbed-node-1] => (item=enable_glance_True)
ok: [testbed-node-2] => (item=enable_glance_True)

PLAY [Apply role glance] *******************************************************

TASK [glance : include_tasks] **************************************************
Tuesday 06 May 2025 01:05:06 +0000 (0:00:00.235) 0:00:00.695 ***********
included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : glance | Creating services] ************************
Tuesday 06 May 2025 01:05:06 +0000 (0:00:00.461) 0:00:01.157 ***********
changed: [testbed-node-0] => (item=glance (image))

TASK [service-ks-register : glance | Creating endpoints] ***********************
Tuesday 06 May 2025 01:05:10 +0000 (0:00:03.569) 0:00:04.726 ***********
changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)

TASK [service-ks-register : glance | Creating projects] ************************
Tuesday 06 May 2025 01:05:17 +0000 (0:00:06.688) 0:00:11.415 ***********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : glance | Creating users] ***************************
Tuesday 06 May 2025 01:05:20 +0000 (0:00:03.441) 0:00:14.857 ***********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=glance -> service)

TASK [service-ks-register : glance | Creating roles] ***************************
Tuesday 06 May 2025 01:05:24 +0000 (0:00:03.918) 0:00:18.776 ***********
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : glance | Granting user roles] **********************
Tuesday 06 May 2025 01:05:27 +0000 (0:00:03.136) 0:00:21.912 ***********
changed: [testbed-node-0] => (item=glance -> service -> admin)

TASK [glance : Ensuring config directories exist] ******************************
Tuesday 06 May 2025 01:05:31 +0000 (0:00:04.251) 0:00:26.164 ***********
changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})

TASK [glance : include_tasks] **************************************************
Tuesday 06 May 2025 01:05:35 +0000 (0:00:04.008) 0:00:30.172 ***********
included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [glance : Ensuring glance service ceph config subdir exists] **************
Tuesday 06 May 2025 01:05:36 +0000 (0:00:00.542) 0:00:30.714 ***********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [glance : Copy over multiple ceph configs for Glance] *********************
[glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-06 01:08:14.906400 | orchestrator | Tuesday 06 May 2025 01:05:47 +0000 (0:00:11.344) 0:00:42.059 *********** 2025-05-06 01:08:14.906421 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-06 01:08:14.906443 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-06 01:08:14.906465 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-06 01:08:14.906487 | orchestrator | 2025-05-06 01:08:14.906509 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-06 01:08:14.906530 | orchestrator | Tuesday 06 May 2025 01:05:50 +0000 (0:00:02.290) 0:00:44.349 *********** 2025-05-06 01:08:14.906551 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-06 01:08:14.906573 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-06 01:08:14.906595 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-06 01:08:14.906617 | orchestrator | 2025-05-06 01:08:14.906638 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-06 01:08:14.906660 | orchestrator | Tuesday 06 May 2025 01:05:51 +0000 (0:00:01.237) 0:00:45.586 *********** 2025-05-06 01:08:14.906681 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:08:14.906703 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:08:14.906725 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:08:14.906746 | orchestrator | 2025-05-06 01:08:14.906768 | orchestrator | TASK [glance : Check if policies shall be overwritten] 
************************* 2025-05-06 01:08:14.906789 | orchestrator | Tuesday 06 May 2025 01:05:51 +0000 (0:00:00.597) 0:00:46.184 *********** 2025-05-06 01:08:14.906811 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:14.906832 | orchestrator | 2025-05-06 01:08:14.906854 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-06 01:08:14.906875 | orchestrator | Tuesday 06 May 2025 01:05:52 +0000 (0:00:00.178) 0:00:46.363 *********** 2025-05-06 01:08:14.906896 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:14.906941 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:14.906963 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:14.906985 | orchestrator | 2025-05-06 01:08:14.907006 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-06 01:08:14.907027 | orchestrator | Tuesday 06 May 2025 01:05:52 +0000 (0:00:00.218) 0:00:46.582 *********** 2025-05-06 01:08:14.907048 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:08:14.907067 | orchestrator | 2025-05-06 01:08:14.907097 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-06 01:08:14.907118 | orchestrator | Tuesday 06 May 2025 01:05:52 +0000 (0:00:00.603) 0:00:47.185 *********** 2025-05-06 01:08:14.907153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 01:08:14.907195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 01:08:14.907229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 01:08:14.907278 | orchestrator | 2025-05-06 01:08:14.907300 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-06 01:08:14.907322 | orchestrator | Tuesday 06 May 2025 01:05:56 +0000 (0:00:03.749) 0:00:50.934 *********** 2025-05-06 01:08:14.907345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-06 01:08:14.907367 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:14.907400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-06 01:08:14.907454 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:14.907477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-06 01:08:14.907500 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:14.907522 | orchestrator | 2025-05-06 01:08:14.907544 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-06 01:08:14.907572 | orchestrator | Tuesday 06 May 2025 01:06:01 +0000 (0:00:04.579) 0:00:55.513 *********** 2025-05-06 01:08:14.907604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-06 01:08:14.907645 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:14.907668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-06 01:08:14.907697 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:14.907719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-06 01:08:14.907751 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:14.907773 | orchestrator | 2025-05-06 01:08:14.907795 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-06 01:08:14.907817 | orchestrator | Tuesday 06 May 2025 01:06:06 +0000 (0:00:04.999) 0:01:00.513 *********** 2025-05-06 01:08:14.907838 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:14.907860 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:14.907882 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:14.907938 | orchestrator | 2025-05-06 01:08:14.907969 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-06 01:08:14.907991 | orchestrator | Tuesday 06 May 2025 01:06:09 +0000 (0:00:03.064) 0:01:03.578 *********** 2025-05-06 01:08:14.908013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 01:08:14.908051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 01:08:14.908096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 01:08:14.908133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 01:08:14.908176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 01:08:14.908213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-05-06 01:08:14.908236 | orchestrator |
2025-05-06 01:08:14.908258 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-05-06 01:08:14.908289 | orchestrator | Tuesday 06 May 2025 01:06:14 +0000 (0:00:04.765) 0:01:08.343 ***********
2025-05-06 01:08:14.908312 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:08:14.908333 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:08:14.908355 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:08:14.908376 | orchestrator |
2025-05-06 01:08:14.908398 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-05-06 01:08:14.908419 | orchestrator | Tuesday 06 May 2025 01:06:27 +0000 (0:00:12.899) 0:01:21.242 ***********
2025-05-06 01:08:14.908441 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:08:14.908463 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:08:14.908485 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:08:14.908506 | orchestrator |
2025-05-06 01:08:14.908527 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-05-06 01:08:14.908549 | orchestrator | Tuesday 06 May 2025 01:06:36 +0000 (0:00:09.149) 0:01:30.392 ***********
2025-05-06 01:08:14.908571 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:08:14.908599 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:08:14.908621 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:08:14.908642 | orchestrator |
2025-05-06 01:08:14.908664 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-05-06 01:08:14.908686 | orchestrator | Tuesday 06 May 2025 01:06:46 +0000 (0:00:09.930) 0:01:40.323 ***********
2025-05-06 01:08:14.908707 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:08:14.908729 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:08:14.908750 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:08:14.908772 | orchestrator |
2025-05-06 01:08:14.908794 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-05-06 01:08:14.908816 | orchestrator | Tuesday 06 May 2025 01:06:55 +0000 (0:00:09.536) 0:01:49.860 ***********
2025-05-06 01:08:14.908838 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:08:14.908866 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:08:14.908887 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:08:14.908929 | orchestrator |
2025-05-06 01:08:14.908951 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-05-06 01:08:14.908973 | orchestrator | Tuesday 06 May 2025 01:07:00 +0000 (0:00:04.368) 0:01:54.229 ***********
2025-05-06 01:08:14.908994 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:08:14.909016 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:08:14.909038 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:08:14.909057 | orchestrator |
2025-05-06 01:08:14.909079 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-05-06 01:08:14.909101 | orchestrator | Tuesday 06 May 2025 01:07:00 +0000 (0:00:00.305) 0:01:54.534 ***********
2025-05-06 01:08:14.909122 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-06 01:08:14.909144 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:08:14.909165 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-06 01:08:14.909187 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:08:14.909209 | orchestrator | skipping: [testbed-node-2] =>
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-06 01:08:14.909231 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:14.909252 | orchestrator | 2025-05-06 01:08:14.909274 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-06 01:08:14.909296 | orchestrator | Tuesday 06 May 2025 01:07:02 +0000 (0:00:02.666) 0:01:57.201 *********** 2025-05-06 01:08:14.909318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 01:08:14.909376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 01:08:14.909399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 01:08:14.909431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-06 01:08:14.909486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 01:08:14.909534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-06 01:08:14.909557 | orchestrator | 2025-05-06 01:08:14.909579 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-06 01:08:14.909601 | orchestrator | Tuesday 06 May 2025 01:07:06 +0000 (0:00:03.827) 0:02:01.029 *********** 2025-05-06 01:08:14.909623 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:14.909644 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:14.909666 | orchestrator | 
skipping: [testbed-node-2]
2025-05-06 01:08:14.909687 | orchestrator |
2025-05-06 01:08:14.909716 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-05-06 01:08:14.909738 | orchestrator | Tuesday 06 May 2025 01:07:07 +0000 (0:00:00.280) 0:02:01.309 ***********
2025-05-06 01:08:14.909760 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:08:14.909782 | orchestrator |
2025-05-06 01:08:14.909803 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-05-06 01:08:14.909825 | orchestrator | Tuesday 06 May 2025 01:07:09 +0000 (0:00:02.209) 0:02:03.518 ***********
2025-05-06 01:08:14.909846 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:08:14.909867 | orchestrator |
2025-05-06 01:08:14.909889 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-05-06 01:08:14.909930 | orchestrator | Tuesday 06 May 2025 01:07:11 +0000 (0:00:02.443) 0:02:05.962 ***********
2025-05-06 01:08:14.909951 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:08:14.909973 | orchestrator |
2025-05-06 01:08:14.909995 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-05-06 01:08:14.910054 | orchestrator | Tuesday 06 May 2025 01:07:13 +0000 (0:00:02.205) 0:02:08.168 ***********
2025-05-06 01:08:14.910089 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:08:14.910111 | orchestrator |
2025-05-06 01:08:14.910132 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-05-06 01:08:14.910161 | orchestrator | Tuesday 06 May 2025 01:07:39 +0000 (0:00:25.287) 0:02:33.456 ***********
2025-05-06 01:08:14.910183 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:08:14.910204 | orchestrator |
2025-05-06 01:08:14.910226 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-06 01:08:14.910247 | orchestrator | Tuesday 06 May 2025 01:07:41 +0000 (0:00:02.473) 0:02:35.929 ***********
2025-05-06 01:08:14.910269 | orchestrator |
2025-05-06 01:08:14.910291 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-06 01:08:14.910313 | orchestrator | Tuesday 06 May 2025 01:07:41 +0000 (0:00:00.070) 0:02:36.000 ***********
2025-05-06 01:08:14.910334 | orchestrator |
2025-05-06 01:08:14.910356 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-06 01:08:14.910377 | orchestrator | Tuesday 06 May 2025 01:07:41 +0000 (0:00:00.066) 0:02:36.066 ***********
2025-05-06 01:08:14.910399 | orchestrator |
2025-05-06 01:08:14.910420 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-05-06 01:08:14.910442 | orchestrator | Tuesday 06 May 2025 01:07:42 +0000 (0:00:00.259) 0:02:36.325 ***********
2025-05-06 01:08:14.910464 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:08:14.910482 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:08:14.910500 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:08:14.910518 | orchestrator |
2025-05-06 01:08:14.910536 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 01:08:14.910554 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-06 01:08:14.910573 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-06 01:08:14.910591 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-06 01:08:14.910609 | orchestrator |
2025-05-06 01:08:14.910626 | orchestrator |
2025-05-06 01:08:14.910644 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 01:08:14.910662 | orchestrator | Tuesday 06 May 2025 01:08:12 +0000 (0:00:30.697) 0:03:07.022 ***********
2025-05-06 01:08:14.910680 | orchestrator | ===============================================================================
2025-05-06 01:08:14.910697 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.70s
2025-05-06 01:08:14.910714 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.29s
2025-05-06 01:08:14.910732 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 12.90s
2025-05-06 01:08:14.910749 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 11.34s
2025-05-06 01:08:14.910767 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 9.93s
2025-05-06 01:08:14.910784 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 9.54s
2025-05-06 01:08:14.910802 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 9.15s
2025-05-06 01:08:14.910819 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.69s
2025-05-06 01:08:14.910837 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.00s
2025-05-06 01:08:14.910855 | orchestrator | glance : Copying over config.json files for services -------------------- 4.77s
2025-05-06 01:08:14.910872 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.58s
2025-05-06 01:08:14.910890 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.37s
2025-05-06 01:08:14.910953 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.25s
2025-05-06 01:08:14.910980 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.01s
2025-05-06 01:08:14.910997 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.92s
2025-05-06 01:08:14.911015 | orchestrator | glance : Check glance containers ---------------------------------------- 3.83s
2025-05-06 01:08:14.911033 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.75s
2025-05-06 01:08:14.911050 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.57s
2025-05-06 01:08:14.911071 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.44s
2025-05-06 01:08:14.911096 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.14s
2025-05-06 01:08:17.961631 | orchestrator | 2025-05-06 01:08:14 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:08:17.961781 | orchestrator | 2025-05-06 01:08:17 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED
2025-05-06 01:08:17.962985 | orchestrator | 2025-05-06 01:08:17 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:08:17.963025 | orchestrator | 2025-05-06 01:08:17 | INFO  | Task 82ca487e-63ab-4277-a98a-340bc9664dc4 is in state SUCCESS
2025-05-06 01:08:17.964707 | orchestrator |
2025-05-06 01:08:17.964742 | orchestrator |
2025-05-06 01:08:17.964754 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 01:08:17.964766 | orchestrator |
2025-05-06 01:08:17.964777 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 01:08:17.964789 | orchestrator | Tuesday 06 May 2025 01:05:25 +0000 (0:00:00.295) 0:00:00.295 ***********
2025-05-06 01:08:17.964800 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:08:17.964814 | orchestrator | ok: [testbed-node-1]
2025-05-06 01:08:17.964825 | orchestrator | ok: [testbed-node-2]
2025-05-06 01:08:17.964891 | orchestrator | ok: [testbed-node-3]
2025-05-06 01:08:17.964989 | orchestrator | ok: [testbed-node-4]
2025-05-06 01:08:17.965133 | orchestrator | ok: [testbed-node-5]
2025-05-06 01:08:17.965149 | orchestrator |
2025-05-06 01:08:17.965161 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 01:08:17.965561 | orchestrator | Tuesday 06 May 2025 01:05:25 +0000 (0:00:00.600) 0:00:00.895 ***********
2025-05-06 01:08:17.965584 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-05-06 01:08:17.965604 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-05-06 01:08:17.965623 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-05-06 01:08:17.965636 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-05-06 01:08:17.965647 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-05-06 01:08:17.965658 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-05-06 01:08:17.965669 | orchestrator |
2025-05-06 01:08:17.965776 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-05-06 01:08:17.965798 | orchestrator |
2025-05-06 01:08:17.966508 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-06 01:08:17.966537 | orchestrator | Tuesday 06 May 2025 01:05:26 +0000 (0:00:00.780) 0:00:01.676 ***********
2025-05-06 01:08:17.966549 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-06 01:08:17.966563 | orchestrator |
2025-05-06 01:08:17.966574 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-05-06 01:08:17.966586 | orchestrator | Tuesday 06 May 2025 01:05:27 +0000 (0:00:01.203) 0:00:02.880 ***********
2025-05-06 01:08:17.966598 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-05-06 01:08:17.966609 | orchestrator |
2025-05-06 01:08:17.966620 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-05-06 01:08:17.966632 | orchestrator | Tuesday 06 May 2025 01:05:31 +0000 (0:00:03.465) 0:00:06.346 ***********
2025-05-06 01:08:17.966670 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-05-06 01:08:17.966682 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-05-06 01:08:17.966693 | orchestrator |
2025-05-06 01:08:17.966705 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-05-06 01:08:17.966716 | orchestrator | Tuesday 06 May 2025 01:05:38 +0000 (0:00:06.640) 0:00:12.986 ***********
2025-05-06 01:08:17.966729 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-06 01:08:17.966740 | orchestrator |
2025-05-06 01:08:17.966751 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-05-06 01:08:17.966763 | orchestrator | Tuesday 06 May 2025 01:05:42 +0000 (0:00:03.951) 0:00:16.938 ***********
2025-05-06 01:08:17.966774 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-06 01:08:17.966785 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-05-06 01:08:17.966797 | orchestrator |
2025-05-06 01:08:17.966808 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-05-06 01:08:17.966819 | orchestrator | Tuesday 06 May 2025 01:05:46 +0000 (0:00:04.156) 0:00:21.094 ***********
2025-05-06 01:08:17.966830 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-06 01:08:17.966842 | orchestrator |
2025-05-06 01:08:17.966853 | orchestrator | TASK [service-ks-register : cinder | Granting user roles]
********************** 2025-05-06 01:08:17.966864 | orchestrator | Tuesday 06 May 2025 01:05:49 +0000 (0:00:03.379) 0:00:24.474 *********** 2025-05-06 01:08:17.966875 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-06 01:08:17.966886 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-06 01:08:17.966920 | orchestrator | 2025-05-06 01:08:17.966933 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-06 01:08:17.966963 | orchestrator | Tuesday 06 May 2025 01:05:58 +0000 (0:00:08.948) 0:00:33.422 *********** 2025-05-06 01:08:17.967020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 01:08:17.967038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 01:08:17.967052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.967075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.967089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.967101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.967142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.967159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.967180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.967195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.967210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.967288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.967306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.967328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.967341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.967355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.967394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 01:08:17.967418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.967439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.967453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.967467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.967489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.967527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.967547 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.967558 | orchestrator | 2025-05-06 01:08:17.967570 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-06 01:08:17.967581 | orchestrator | Tuesday 06 May 2025 01:06:01 +0000 (0:00:02.840) 0:00:36.262 *********** 2025-05-06 01:08:17.967593 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:17.967604 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:17.967616 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:17.967627 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 01:08:17.967638 | orchestrator | 2025-05-06 01:08:17.967650 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-06 01:08:17.967661 | orchestrator | Tuesday 06 May 2025 01:06:02 +0000 (0:00:01.478) 0:00:37.740 *********** 2025-05-06 01:08:17.967672 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-06 01:08:17.967683 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-06 01:08:17.967695 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-06 01:08:17.967706 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-06 01:08:17.967717 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-06 01:08:17.967728 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-06 01:08:17.967739 | orchestrator | 2025-05-06 01:08:17.967750 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-06 01:08:17.967761 | orchestrator | Tuesday 06 May 2025 01:06:06 +0000 (0:00:03.217) 0:00:40.958 *********** 2025-05-06 01:08:17.967774 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-06 01:08:17.967787 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-06 01:08:17.967839 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-06 01:08:17.967852 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-06 01:08:17.967875 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-06 01:08:17.967888 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-06 01:08:17.967935 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-06 01:08:17.967983 | orchestrator | changed: 
[testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-06 01:08:17.967997 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-06 01:08:17.968009 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-06 01:08:17.968033 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-06 01:08:17.968067 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-06 01:08:17.968086 | orchestrator | 2025-05-06 01:08:17.968098 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-06 01:08:17.968109 | orchestrator | Tuesday 06 May 2025 01:06:09 +0000 (0:00:03.454) 0:00:44.413 *********** 2025-05-06 01:08:17.968121 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-06 01:08:17.968132 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-06 01:08:17.968144 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-06 01:08:17.968155 | orchestrator | 2025-05-06 01:08:17.968171 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-06 01:08:17.968187 | orchestrator | Tuesday 06 May 2025 01:06:11 +0000 (0:00:01.528) 0:00:45.941 *********** 2025-05-06 01:08:17.968198 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-06 01:08:17.968209 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-06 01:08:17.968221 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-06 01:08:17.968232 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-06 01:08:17.968243 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-06 01:08:17.968254 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-06 01:08:17.968265 | orchestrator | 
2025-05-06 01:08:17.968276 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-06 01:08:17.968288 | orchestrator | Tuesday 06 May 2025 01:06:13 +0000 (0:00:02.989) 0:00:48.930 *********** 2025-05-06 01:08:17.968299 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-06 01:08:17.968310 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-06 01:08:17.968322 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-06 01:08:17.968333 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-06 01:08:17.968344 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-06 01:08:17.968355 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-06 01:08:17.968366 | orchestrator | 2025-05-06 01:08:17.968377 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-06 01:08:17.968388 | orchestrator | Tuesday 06 May 2025 01:06:15 +0000 (0:00:01.083) 0:00:50.014 *********** 2025-05-06 01:08:17.968400 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:17.968411 | orchestrator | 2025-05-06 01:08:17.968422 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-06 01:08:17.968433 | orchestrator | Tuesday 06 May 2025 01:06:15 +0000 (0:00:00.206) 0:00:50.220 *********** 2025-05-06 01:08:17.968444 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:17.968455 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:17.968466 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:17.968477 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:08:17.968488 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:08:17.968499 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:08:17.968510 | orchestrator | 2025-05-06 01:08:17.968521 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-05-06 01:08:17.968533 | orchestrator | Tuesday 06 May 2025 01:06:16 +0000 (0:00:01.153) 0:00:51.374 *********** 2025-05-06 01:08:17.968545 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 01:08:17.968563 | orchestrator | 2025-05-06 01:08:17.968574 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-06 01:08:17.968585 | orchestrator | Tuesday 06 May 2025 01:06:17 +0000 (0:00:01.223) 0:00:52.597 *********** 2025-05-06 01:08:17.968596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 01:08:17.968634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 01:08:17.968657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.968670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.968681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.968699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 01:08:17.968737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.968759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.968772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.968783 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.968803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.968815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.968827 | orchestrator | 2025-05-06 01:08:17.968838 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-06 01:08:17.968850 | orchestrator | Tuesday 06 May 2025 01:06:20 +0000 (0:00:02.857) 0:00:55.455 *********** 2025-05-06 01:08:17.968915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.968933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.968945 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:17.968956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.968975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.968987 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:17.969008 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.969048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969062 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:17.969073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969106 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:08:17.969117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969151 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:08:17.969186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2025-05-06 01:08:17.969200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969211 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:08:17.969223 | orchestrator | 2025-05-06 01:08:17.969234 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-06 01:08:17.969245 | orchestrator | Tuesday 06 May 2025 01:06:23 +0000 (0:00:02.844) 0:00:58.300 *********** 2025-05-06 01:08:17.969263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.969275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.969333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969375 | orchestrator | skipping: 
[testbed-node-0] 2025-05-06 01:08:17.969387 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:17.969398 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:08:17.969410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.969421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969433 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:17.969484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969516 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:08:17.969528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969562 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:08:17.969573 | orchestrator | 2025-05-06 01:08:17.969584 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-06 01:08:17.969596 | orchestrator | Tuesday 06 May 2025 01:06:27 +0000 (0:00:03.721) 0:01:02.021 *********** 2025-05-06 01:08:17.969607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.969642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 01:08:17.969673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.969684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.969704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.969717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.969752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.969771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.969783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.969804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.969839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.969852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.969870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.969891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.969932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.969952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970424 | orchestrator |
2025-05-06 01:08:17.970436 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-05-06 01:08:17.970447 | orchestrator | Tuesday 06 May 2025 01:06:30 +0000 (0:00:03.830) 0:01:05.852 ***********
2025-05-06 01:08:17.970458 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-06 01:08:17.970470 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:08:17.970481 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-06 01:08:17.970492 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:08:17.970503 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-06 01:08:17.970515 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:08:17.970526 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-06 01:08:17.970537 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-06 01:08:17.970548 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-06 01:08:17.970559 | orchestrator |
2025-05-06 01:08:17.970570 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-05-06 01:08:17.970581 | orchestrator | Tuesday 06 May 2025 01:06:34 +0000 (0:00:03.544) 0:01:09.397 ***********
2025-05-06 01:08:17.970592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.970604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.970640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.970664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.970688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.970711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.970747 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.970994 | orchestrator |
2025-05-06 01:08:17.971011 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-05-06 01:08:17.971022 | orchestrator | Tuesday 06 May 2025 01:06:44 +0000 (0:00:10.437) 0:01:19.835 ***********
2025-05-06 01:08:17.971042 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:08:17.971061 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:08:17.971081 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:08:17.971101 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:08:17.971122 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:08:17.971134 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:08:17.971145 | orchestrator |
2025-05-06 01:08:17.971156 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-05-06 01:08:17.971167 | orchestrator | Tuesday 06 May 2025 01:06:47 +0000 (0:00:02.451) 0:01:22.286 ***********
2025-05-06 01:08:17.971178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.971190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.971202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.971223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-06 01:08:17.971242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-06 01:08:17.971254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler
5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971296 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:17.971307 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:17.971317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.971338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971371 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:08:17.971381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.971397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971434 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:17.971444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.971455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.971481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971534 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:08:17.971544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971555 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:08:17.971565 | orchestrator | 2025-05-06 01:08:17.971575 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-06 01:08:17.971585 | orchestrator | Tuesday 06 May 2025 01:06:49 +0000 (0:00:01.670) 0:01:23.956 *********** 2025-05-06 01:08:17.971595 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:17.971606 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:08:17.971616 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:17.971626 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:08:17.971636 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:08:17.971646 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:08:17.971655 | orchestrator | 2025-05-06 01:08:17.971666 | orchestrator | TASK [cinder : Check cinder containers] 
**************************************** 2025-05-06 01:08:17.971680 | orchestrator | Tuesday 06 May 2025 01:06:50 +0000 (0:00:01.145) 0:01:25.101 *********** 2025-05-06 01:08:17.971695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.971706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.971733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 01:08:17.971758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 01:08:17.971769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-06 01:08:17.971779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-06 01:08:17.971795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.971821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.971832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.971842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.971861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.971942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.971968 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.971980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.971991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.972007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-06 01:08:17.972018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.972034 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-06 01:08:17.972044 | orchestrator | 2025-05-06 01:08:17.972055 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-06 01:08:17.972065 | orchestrator | Tuesday 06 May 2025 01:06:54 +0000 (0:00:04.003) 0:01:29.105 *********** 2025-05-06 01:08:17.972076 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:17.972086 | orchestrator | 
skipping: [testbed-node-1] 2025-05-06 01:08:17.972096 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:08:17.972106 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:08:17.972117 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:08:17.972127 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:08:17.972137 | orchestrator | 2025-05-06 01:08:17.972147 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-06 01:08:17.972157 | orchestrator | Tuesday 06 May 2025 01:06:55 +0000 (0:00:01.049) 0:01:30.154 *********** 2025-05-06 01:08:17.972167 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:08:17.972177 | orchestrator | 2025-05-06 01:08:17.972188 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-06 01:08:17.972198 | orchestrator | Tuesday 06 May 2025 01:06:57 +0000 (0:00:02.423) 0:01:32.577 *********** 2025-05-06 01:08:17.972208 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:08:17.972218 | orchestrator | 2025-05-06 01:08:17.972228 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-06 01:08:17.972238 | orchestrator | Tuesday 06 May 2025 01:06:59 +0000 (0:00:02.331) 0:01:34.909 *********** 2025-05-06 01:08:17.972248 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:08:17.972258 | orchestrator | 2025-05-06 01:08:17.972268 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-06 01:08:17.972278 | orchestrator | Tuesday 06 May 2025 01:07:16 +0000 (0:00:16.801) 0:01:51.711 *********** 2025-05-06 01:08:17.972288 | orchestrator | 2025-05-06 01:08:17.972299 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-06 01:08:17.972309 | orchestrator | Tuesday 06 May 2025 01:07:16 +0000 (0:00:00.065) 0:01:51.776 *********** 2025-05-06 01:08:17.972319 | orchestrator | 
2025-05-06 01:08:17.972329 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-06 01:08:17.972339 | orchestrator | Tuesday 06 May 2025 01:07:17 +0000 (0:00:00.207) 0:01:51.983 *********** 2025-05-06 01:08:17.972350 | orchestrator | 2025-05-06 01:08:17.972360 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-06 01:08:17.972370 | orchestrator | Tuesday 06 May 2025 01:07:17 +0000 (0:00:00.053) 0:01:52.037 *********** 2025-05-06 01:08:17.972380 | orchestrator | 2025-05-06 01:08:17.972390 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-06 01:08:17.972400 | orchestrator | Tuesday 06 May 2025 01:07:17 +0000 (0:00:00.052) 0:01:52.089 *********** 2025-05-06 01:08:17.972410 | orchestrator | 2025-05-06 01:08:17.972420 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-06 01:08:17.972430 | orchestrator | Tuesday 06 May 2025 01:07:17 +0000 (0:00:00.053) 0:01:52.143 *********** 2025-05-06 01:08:17.972440 | orchestrator | 2025-05-06 01:08:17.972450 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-06 01:08:17.972465 | orchestrator | Tuesday 06 May 2025 01:07:17 +0000 (0:00:00.240) 0:01:52.383 *********** 2025-05-06 01:08:17.972475 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:08:17.972485 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:08:17.972495 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:08:17.972505 | orchestrator | 2025-05-06 01:08:17.972515 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-06 01:08:17.972525 | orchestrator | Tuesday 06 May 2025 01:07:34 +0000 (0:00:17.515) 0:02:09.898 *********** 2025-05-06 01:08:17.972536 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:08:17.972546 | orchestrator | 
changed: [testbed-node-1] 2025-05-06 01:08:17.972556 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:08:17.972566 | orchestrator | 2025-05-06 01:08:17.972576 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-06 01:08:17.972591 | orchestrator | Tuesday 06 May 2025 01:07:40 +0000 (0:00:05.319) 0:02:15.218 *********** 2025-05-06 01:08:21.011859 | orchestrator | changed: [testbed-node-3] 2025-05-06 01:08:21.012025 | orchestrator | changed: [testbed-node-5] 2025-05-06 01:08:21.012045 | orchestrator | changed: [testbed-node-4] 2025-05-06 01:08:21.012060 | orchestrator | 2025-05-06 01:08:21.012076 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-06 01:08:21.012091 | orchestrator | Tuesday 06 May 2025 01:08:04 +0000 (0:00:24.566) 0:02:39.785 *********** 2025-05-06 01:08:21.012105 | orchestrator | changed: [testbed-node-3] 2025-05-06 01:08:21.012119 | orchestrator | changed: [testbed-node-5] 2025-05-06 01:08:21.012133 | orchestrator | changed: [testbed-node-4] 2025-05-06 01:08:21.012147 | orchestrator | 2025-05-06 01:08:21.012161 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-06 01:08:21.012176 | orchestrator | Tuesday 06 May 2025 01:08:15 +0000 (0:00:10.689) 0:02:50.475 *********** 2025-05-06 01:08:21.012190 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:08:21.012204 | orchestrator | 2025-05-06 01:08:21.012218 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 01:08:21.012233 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-06 01:08:21.012249 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-06 01:08:21.012263 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  
rescued=0 ignored=0 2025-05-06 01:08:21.012276 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-06 01:08:21.012291 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-06 01:08:21.012305 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-06 01:08:21.012319 | orchestrator | 2025-05-06 01:08:21.012333 | orchestrator | 2025-05-06 01:08:21.012369 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 01:08:21.012386 | orchestrator | Tuesday 06 May 2025 01:08:16 +0000 (0:00:00.539) 0:02:51.014 *********** 2025-05-06 01:08:21.012402 | orchestrator | =============================================================================== 2025-05-06 01:08:21.012419 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.57s 2025-05-06 01:08:21.012443 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 17.52s 2025-05-06 01:08:21.012467 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 16.80s 2025-05-06 01:08:21.012491 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.69s 2025-05-06 01:08:21.012551 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.44s 2025-05-06 01:08:21.012570 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.95s 2025-05-06 01:08:21.012587 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.64s 2025-05-06 01:08:21.012603 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.32s 2025-05-06 01:08:21.012619 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.16s 
2025-05-06 01:08:21.012636 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.00s 2025-05-06 01:08:21.012652 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.95s 2025-05-06 01:08:21.012668 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.83s 2025-05-06 01:08:21.012685 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 3.72s 2025-05-06 01:08:21.012701 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.54s 2025-05-06 01:08:21.012718 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.47s 2025-05-06 01:08:21.012734 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.45s 2025-05-06 01:08:21.012748 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.38s 2025-05-06 01:08:21.012762 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.22s 2025-05-06 01:08:21.012776 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.99s 2025-05-06 01:08:21.012789 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.86s 2025-05-06 01:08:21.012803 | orchestrator | 2025-05-06 01:08:17 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:21.012818 | orchestrator | 2025-05-06 01:08:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:21.012832 | orchestrator | 2025-05-06 01:08:17 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:21.012846 | orchestrator | 2025-05-06 01:08:17 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:21.012886 | orchestrator | 2025-05-06 01:08:21 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 
is in state STARTED 2025-05-06 01:08:21.013983 | orchestrator | 2025-05-06 01:08:21 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:21.015802 | orchestrator | 2025-05-06 01:08:21 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:21.017136 | orchestrator | 2025-05-06 01:08:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:21.018607 | orchestrator | 2025-05-06 01:08:21 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:24.064318 | orchestrator | 2025-05-06 01:08:21 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:24.064425 | orchestrator | 2025-05-06 01:08:24 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:24.065413 | orchestrator | 2025-05-06 01:08:24 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:24.066748 | orchestrator | 2025-05-06 01:08:24 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:24.068371 | orchestrator | 2025-05-06 01:08:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:24.071482 | orchestrator | 2025-05-06 01:08:24 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:24.071542 | orchestrator | 2025-05-06 01:08:24 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:27.122880 | orchestrator | 2025-05-06 01:08:27 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:27.123931 | orchestrator | 2025-05-06 01:08:27 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:27.126215 | orchestrator | 2025-05-06 01:08:27 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:27.128035 | orchestrator | 2025-05-06 01:08:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in 
state STARTED 2025-05-06 01:08:27.129551 | orchestrator | 2025-05-06 01:08:27 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:30.178866 | orchestrator | 2025-05-06 01:08:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:30.179063 | orchestrator | 2025-05-06 01:08:30 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:33.237486 | orchestrator | 2025-05-06 01:08:30 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:33.237619 | orchestrator | 2025-05-06 01:08:30 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:33.237638 | orchestrator | 2025-05-06 01:08:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:33.237653 | orchestrator | 2025-05-06 01:08:30 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:33.237669 | orchestrator | 2025-05-06 01:08:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:33.237703 | orchestrator | 2025-05-06 01:08:33 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:33.241157 | orchestrator | 2025-05-06 01:08:33 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:33.242221 | orchestrator | 2025-05-06 01:08:33 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:33.244268 | orchestrator | 2025-05-06 01:08:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:33.245508 | orchestrator | 2025-05-06 01:08:33 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:33.245975 | orchestrator | 2025-05-06 01:08:33 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:36.297238 | orchestrator | 2025-05-06 01:08:36 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 
01:08:36.298587 | orchestrator | 2025-05-06 01:08:36 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:36.299496 | orchestrator | 2025-05-06 01:08:36 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:36.301168 | orchestrator | 2025-05-06 01:08:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:36.302373 | orchestrator | 2025-05-06 01:08:36 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:39.352494 | orchestrator | 2025-05-06 01:08:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:39.352647 | orchestrator | 2025-05-06 01:08:39 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:39.353127 | orchestrator | 2025-05-06 01:08:39 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:39.354718 | orchestrator | 2025-05-06 01:08:39 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:39.355652 | orchestrator | 2025-05-06 01:08:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:39.359110 | orchestrator | 2025-05-06 01:08:39 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:42.405712 | orchestrator | 2025-05-06 01:08:39 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:42.405856 | orchestrator | 2025-05-06 01:08:42 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:42.406138 | orchestrator | 2025-05-06 01:08:42 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:42.412978 | orchestrator | 2025-05-06 01:08:42 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:42.415159 | orchestrator | 2025-05-06 01:08:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 
01:08:42.416039 | orchestrator | 2025-05-06 01:08:42 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:45.468114 | orchestrator | 2025-05-06 01:08:42 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:45.468349 | orchestrator | 2025-05-06 01:08:45 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:45.469783 | orchestrator | 2025-05-06 01:08:45 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:45.469809 | orchestrator | 2025-05-06 01:08:45 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:45.469827 | orchestrator | 2025-05-06 01:08:45 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:45.472518 | orchestrator | 2025-05-06 01:08:45 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:48.525636 | orchestrator | 2025-05-06 01:08:45 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:48.525766 | orchestrator | 2025-05-06 01:08:48 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:48.531205 | orchestrator | 2025-05-06 01:08:48 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:48.531285 | orchestrator | 2025-05-06 01:08:48 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:48.534493 | orchestrator | 2025-05-06 01:08:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:48.536298 | orchestrator | 2025-05-06 01:08:48 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:51.581569 | orchestrator | 2025-05-06 01:08:48 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:51.581718 | orchestrator | 2025-05-06 01:08:51 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:51.582149 | orchestrator 
| 2025-05-06 01:08:51 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:51.583083 | orchestrator | 2025-05-06 01:08:51 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:51.583702 | orchestrator | 2025-05-06 01:08:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:51.584389 | orchestrator | 2025-05-06 01:08:51 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:54.635652 | orchestrator | 2025-05-06 01:08:51 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:54.635796 | orchestrator | 2025-05-06 01:08:54 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:54.637018 | orchestrator | 2025-05-06 01:08:54 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:54.638463 | orchestrator | 2025-05-06 01:08:54 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:54.639977 | orchestrator | 2025-05-06 01:08:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:54.641082 | orchestrator | 2025-05-06 01:08:54 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:08:57.689762 | orchestrator | 2025-05-06 01:08:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:08:57.689970 | orchestrator | 2025-05-06 01:08:57 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:08:57.690415 | orchestrator | 2025-05-06 01:08:57 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:08:57.694084 | orchestrator | 2025-05-06 01:08:57 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:08:57.695655 | orchestrator | 2025-05-06 01:08:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:08:57.697671 | orchestrator | 
2025-05-06 01:08:57 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:09:00.753765 | orchestrator | 2025-05-06 01:08:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:09:00.753957 | orchestrator | 2025-05-06 01:09:00 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:09:00.755154 | orchestrator | 2025-05-06 01:09:00 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:09:00.757120 | orchestrator | 2025-05-06 01:09:00 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:09:00.759146 | orchestrator | 2025-05-06 01:09:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:09:00.760355 | orchestrator | 2025-05-06 01:09:00 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:09:03.823625 | orchestrator | 2025-05-06 01:09:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:09:03.823787 | orchestrator | 2025-05-06 01:09:03 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:09:03.824263 | orchestrator | 2025-05-06 01:09:03 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:09:03.824310 | orchestrator | 2025-05-06 01:09:03 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:09:03.825050 | orchestrator | 2025-05-06 01:09:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:09:03.826294 | orchestrator | 2025-05-06 01:09:03 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:09:06.877292 | orchestrator | 2025-05-06 01:09:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:09:06.877430 | orchestrator | 2025-05-06 01:09:06 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:09:06.879365 | orchestrator | 2025-05-06 01:09:06 | INFO  | 
Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:09:06.880889 | orchestrator | 2025-05-06 01:09:06 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:09:06.882562 | orchestrator | 2025-05-06 01:09:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:09:06.884356 | orchestrator | 2025-05-06 01:09:06 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:09:09.945123 | orchestrator | 2025-05-06 01:09:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:09:09.945273 | orchestrator | 2025-05-06 01:09:09 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state STARTED 2025-05-06 01:09:09.947028 | orchestrator | 2025-05-06 01:09:09 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:09:09.950215 | orchestrator | 2025-05-06 01:09:09 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:09:09.951581 | orchestrator | 2025-05-06 01:09:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:09:09.953342 | orchestrator | 2025-05-06 01:09:09 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:09:13.006141 | orchestrator | 2025-05-06 01:09:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:09:13.006421 | orchestrator | 2025-05-06 01:09:13 | INFO  | Task ee951c42-a382-440b-ba40-989f812ca029 is in state SUCCESS 2025-05-06 01:09:13.006461 | orchestrator | 2025-05-06 01:09:13 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:09:13.006903 | orchestrator | 2025-05-06 01:09:13 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:09:13.007732 | orchestrator | 2025-05-06 01:09:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:09:13.008510 | orchestrator | 2025-05-06 01:09:13 | INFO  | Task 
43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED
2025-05-06 01:09:16.067162 | orchestrator | 2025-05-06 01:09:13 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:09:16.067320 | orchestrator | 2025-05-06 01:09:16 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:09:16.068805 | orchestrator | 2025-05-06 01:09:16 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED
2025-05-06 01:09:16.071161 | orchestrator | 2025-05-06 01:09:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:09:16.072875 | orchestrator | 2025-05-06 01:09:16 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED
2025-05-06 01:09:16.073203 | orchestrator | 2025-05-06 01:09:16 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles every ~3 s from 01:09:19 through 01:09:55 trimmed: all four tasks remained in state STARTED ...]
2025-05-06 01:09:58.861830 | orchestrator | 2025-05-06 01:09:58 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:09:58.863985 | orchestrator | 2025-05-06 01:09:58 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED
2025-05-06 01:09:58.866564 | orchestrator | 2025-05-06 01:09:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:09:58.868085 | orchestrator | 2025-05-06 01:09:58 | INFO  | Task 
43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:10:01.917133 | orchestrator | 2025-05-06 01:09:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:10:01.917259 | orchestrator | 2025-05-06 01:10:01 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:10:01.919645 | orchestrator | 2025-05-06 01:10:01 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state STARTED 2025-05-06 01:10:01.921483 | orchestrator | 2025-05-06 01:10:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:10:01.923259 | orchestrator | 2025-05-06 01:10:01 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:10:01.923470 | orchestrator | 2025-05-06 01:10:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:10:04.974354 | orchestrator | 2025-05-06 01:10:04 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:10:04.974623 | orchestrator | 2025-05-06 01:10:04 | INFO  | Task 6cbb1036-e398-4931-92ba-62928578a709 is in state SUCCESS 2025-05-06 01:10:04.978313 | orchestrator | 2025-05-06 01:10:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:10:04.980362 | orchestrator | 2025-05-06 01:10:04 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state STARTED 2025-05-06 01:10:08.033844 | orchestrator | 2025-05-06 01:10:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:10:08.033988 | orchestrator | 2025-05-06 01:10:08 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:10:08.037103 | orchestrator | 2025-05-06 01:10:08 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:10:08.037146 | orchestrator | 2025-05-06 01:10:08 | INFO  | Task 43d93093-40f4-44cb-b9de-20e7b097a86e is in state SUCCESS 2025-05-06 01:10:08.038642 | orchestrator | 2025-05-06 01:10:08.038682 | orchestrator | 2025-05-06 
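The log above shows the deployment tooling polling four task UUIDs about once per second ("Wait 1 second(s) until the next check") until each leaves STARTED and reaches SUCCESS. A minimal sketch of that wait loop, assuming a hypothetical `get_task_state` callable standing in for the real task-state API, which is not shown in the log:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600):
    """Poll task states until no task is in STARTED anymore, mirroring
    the 'Task <uuid> is in state STARTED' / 'Wait 1 second(s)' loop.
    get_task_state(task_id) -> state string (hypothetical stand-in)."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            states[task_id] = get_task_state(task_id)
        # Keep polling only the tasks that have not finished yet.
        pending = {t for t, s in states.items() if s == "STARTED"}
        if pending:
            time.sleep(interval)
    return states
```

This is a sketch of the observed behavior, not the actual OSISM implementation; the real tool also keeps reporting tasks that have already finished, as the SUCCESS lines above show.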
01:10:08.038698 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 01:10:08.038713 | orchestrator | 2025-05-06 01:10:08.038759 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 01:10:08.038776 | orchestrator | Tuesday 06 May 2025 01:08:16 +0000 (0:00:00.277) 0:00:00.277 *********** 2025-05-06 01:10:08.038790 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:10:08.038806 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:10:08.038887 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:10:08.038982 | orchestrator | 2025-05-06 01:10:08.038999 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 01:10:08.039293 | orchestrator | Tuesday 06 May 2025 01:08:16 +0000 (0:00:00.349) 0:00:00.627 *********** 2025-05-06 01:10:08.039346 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-06 01:10:08.039793 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-06 01:10:08.039821 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-06 01:10:08.039841 | orchestrator | 2025-05-06 01:10:08.039856 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-06 01:10:08.039869 | orchestrator | 2025-05-06 01:10:08.039883 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-06 01:10:08.039897 | orchestrator | Tuesday 06 May 2025 01:08:16 +0000 (0:00:00.287) 0:00:00.914 *********** 2025-05-06 01:10:08.039911 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:10:08.039927 | orchestrator | 2025-05-06 01:10:08.039960 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-06 01:10:08.039974 | orchestrator | Tuesday 06 May 2025 
01:08:17 +0000 (0:00:00.684) 0:00:01.598 *********** 2025-05-06 01:10:08.040010 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-06 01:10:08.040024 | orchestrator | 2025-05-06 01:10:08.040039 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-06 01:10:08.040081 | orchestrator | Tuesday 06 May 2025 01:08:21 +0000 (0:00:03.782) 0:00:05.381 *********** 2025-05-06 01:10:08.040096 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-06 01:10:08.040111 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-06 01:10:08.040125 | orchestrator | 2025-05-06 01:10:08.040139 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-06 01:10:08.040153 | orchestrator | Tuesday 06 May 2025 01:08:27 +0000 (0:00:06.729) 0:00:12.110 *********** 2025-05-06 01:10:08.040166 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-06 01:10:08.040181 | orchestrator | 2025-05-06 01:10:08.040194 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-06 01:10:08.040208 | orchestrator | Tuesday 06 May 2025 01:08:31 +0000 (0:00:03.488) 0:00:15.599 *********** 2025-05-06 01:10:08.040222 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-06 01:10:08.040236 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-06 01:10:08.040250 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-06 01:10:08.040264 | orchestrator | 2025-05-06 01:10:08.040278 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-06 01:10:08.040291 | orchestrator | Tuesday 06 May 2025 01:08:39 +0000 (0:00:08.365) 0:00:23.964 *********** 2025-05-06 01:10:08.040305 | 
orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-06 01:10:08.040319 | orchestrator | 2025-05-06 01:10:08.040332 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-06 01:10:08.040346 | orchestrator | Tuesday 06 May 2025 01:08:43 +0000 (0:00:03.320) 0:00:27.285 *********** 2025-05-06 01:10:08.040360 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-06 01:10:08.040374 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-06 01:10:08.040390 | orchestrator | 2025-05-06 01:10:08.040406 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-06 01:10:08.040424 | orchestrator | Tuesday 06 May 2025 01:08:51 +0000 (0:00:08.171) 0:00:35.457 *********** 2025-05-06 01:10:08.040441 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-06 01:10:08.040457 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-06 01:10:08.040473 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-06 01:10:08.040491 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-06 01:10:08.040514 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-06 01:10:08.040531 | orchestrator | 2025-05-06 01:10:08.040549 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-06 01:10:08.040565 | orchestrator | Tuesday 06 May 2025 01:09:07 +0000 (0:00:16.323) 0:00:51.780 *********** 2025-05-06 01:10:08.040582 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:10:08.040599 | orchestrator | 2025-05-06 01:10:08.040615 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-06 01:10:08.040631 | orchestrator | 
Tuesday 06 May 2025 01:09:08 +0000 (0:00:00.779) 0:00:52.560 ***********
2025-05-06 01:10:08.040699 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "}
2025-05-06 01:10:08.040720 | orchestrator |
2025-05-06 01:10:08.040777 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 01:10:08.040808 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-05-06 01:10:08.040824 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:10:08.040838 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:10:08.040852 | orchestrator |
2025-05-06 01:10:08.040866 | orchestrator |
2025-05-06 01:10:08.040879 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 01:10:08.040893 | orchestrator | Tuesday 06 May 2025 01:09:11 +0000 (0:00:03.601) 0:00:56.162 ***********
2025-05-06 01:10:08.040907 | orchestrator | ===============================================================================
2025-05-06 01:10:08.040921 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.32s
2025-05-06 01:10:08.040934 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.37s
2025-05-06 01:10:08.040948 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.17s
2025-05-06 01:10:08.040962 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.73s
2025-05-06 01:10:08.040976 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.78s
2025-05-06 01:10:08.040989 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 
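The failure above is the load balancer (HAProxy fronting the internal Nova API) answering 503 "No server is available to handle this request" because no healthy backend was up yet; the flavor-creation task ran before Nova finished coming online. One common mitigation for this class of transient error is to retry the idempotent API call with backoff. A minimal sketch, assuming a hypothetical `ServiceUnavailable` exception standing in for an HTTP 503 (the playbook itself performs the call through its Ansible module, not through code like this):

```python
import time

class ServiceUnavailable(Exception):
    """Stand-in for an HTTP 503 response from the API (hypothetical)."""

def call_with_503_retry(func, attempts=5, delay=2.0, backoff=2.0):
    """Retry an idempotent API call while the load balancer reports
    503 'No server is available', using exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except ServiceUnavailable:
            if attempt == attempts:
                raise  # out of attempts: surface the 503 to the caller
            time.sleep(delay)
            delay *= backoff
```

Retrying only makes sense for idempotent operations such as ensuring a flavor exists; a plain retry of a non-idempotent call could create duplicates.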
3.60s 2025-05-06 01:10:08.041003 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.49s 2025-05-06 01:10:08.041017 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.32s 2025-05-06 01:10:08.041031 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.78s 2025-05-06 01:10:08.041045 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.68s 2025-05-06 01:10:08.041064 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-05-06 01:10:08.041078 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s 2025-05-06 01:10:08.041092 | orchestrator | 2025-05-06 01:10:08.041105 | orchestrator | 2025-05-06 01:10:08.041119 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 01:10:08.041133 | orchestrator | 2025-05-06 01:10:08.041146 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 01:10:08.041160 | orchestrator | Tuesday 06 May 2025 01:07:25 +0000 (0:00:00.213) 0:00:00.213 *********** 2025-05-06 01:10:08.041174 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:10:08.041188 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:10:08.041202 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:10:08.041217 | orchestrator | 2025-05-06 01:10:08.041230 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 01:10:08.041244 | orchestrator | Tuesday 06 May 2025 01:07:25 +0000 (0:00:00.366) 0:00:00.579 *********** 2025-05-06 01:10:08.041258 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-06 01:10:08.041272 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-06 01:10:08.041286 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-05-06 01:10:08.041299 | orchestrator | 2025-05-06 01:10:08.041313 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-06 01:10:08.041327 | orchestrator | 2025-05-06 01:10:08.041341 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-06 01:10:08.041355 | orchestrator | Tuesday 06 May 2025 01:07:26 +0000 (0:00:00.445) 0:00:01.024 *********** 2025-05-06 01:10:08.041369 | orchestrator | 2025-05-06 01:10:08.041383 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-05-06 01:10:08.041397 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:10:08.041419 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:10:08.041435 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:10:08.041456 | orchestrator | 2025-05-06 01:10:08.041471 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-06 01:10:08.041485 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 01:10:08.041499 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 01:10:08.041513 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-06 01:10:08.041527 | orchestrator | 2025-05-06 01:10:08.041541 | orchestrator | 2025-05-06 01:10:08.041554 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-06 01:10:08.041568 | orchestrator | Tuesday 06 May 2025 01:10:03 +0000 (0:02:37.913) 0:02:38.938 *********** 2025-05-06 01:10:08.041582 | orchestrator | =============================================================================== 2025-05-06 01:10:08.041595 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 157.91s 2025-05-06 
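The "Waiting for Nova public port to be UP" task above blocked for about 158 s until the Nova API port started accepting TCP connections. A minimal sketch of such a port wait, similar in spirit to (but not the implementation of) Ansible's `wait_for` module:

```python
import socket
import time

def wait_for_port(host, port, timeout=300.0, interval=1.0):
    """Return True once a TCP connection to host:port succeeds,
    or False if the deadline passes first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # port is accepting connections
        except OSError:
            time.sleep(interval)  # not up yet; poll again
    return False
```

In the playbook this kind of check gates the subsequent plays so that API-dependent tasks (like the flavor creation that failed earlier) only run once the service is reachable.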
01:10:08.041609 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-05-06 01:10:08.041623 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-05-06 01:10:08.041636 | orchestrator | 2025-05-06 01:10:08.041650 | orchestrator | 2025-05-06 01:10:08.041663 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-06 01:10:08.041677 | orchestrator | 2025-05-06 01:10:08.041781 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-06 01:10:08.041801 | orchestrator | Tuesday 06 May 2025 01:08:19 +0000 (0:00:00.279) 0:00:00.279 *********** 2025-05-06 01:10:08.041815 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:10:08.041828 | orchestrator | ok: [testbed-node-1] 2025-05-06 01:10:08.041842 | orchestrator | ok: [testbed-node-2] 2025-05-06 01:10:08.041856 | orchestrator | 2025-05-06 01:10:08.041870 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-06 01:10:08.041884 | orchestrator | Tuesday 06 May 2025 01:08:19 +0000 (0:00:00.349) 0:00:00.629 *********** 2025-05-06 01:10:08.041898 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-05-06 01:10:08.041911 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-06 01:10:08.041925 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-06 01:10:08.041939 | orchestrator | 2025-05-06 01:10:08.041953 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-06 01:10:08.041967 | orchestrator | 2025-05-06 01:10:08.041981 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-06 01:10:08.041995 | orchestrator | Tuesday 06 May 2025 01:08:19 +0000 (0:00:00.264) 0:00:00.893 *********** 2025-05-06 01:10:08.042009 | orchestrator 
| included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:10:08.042077 | orchestrator | 2025-05-06 01:10:08.042092 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-06 01:10:08.042106 | orchestrator | Tuesday 06 May 2025 01:08:20 +0000 (0:00:00.691) 0:00:01.585 *********** 2025-05-06 01:10:08.042121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-06 01:10:08.042139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-06 01:10:08.042163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-06 01:10:08.042179 | orchestrator | 2025-05-06 01:10:08.042193 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-06 01:10:08.042206 | orchestrator | Tuesday 06 May 2025 01:08:21 +0000 (0:00:00.859) 0:00:02.444 *********** 2025-05-06 01:10:08.042220 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-06 01:10:08.042234 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-06 01:10:08.042247 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-06 01:10:08.042261 | orchestrator | 2025-05-06 01:10:08.042275 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-06 01:10:08.042294 | orchestrator | Tuesday 06 May 2025 01:08:22 +0000 (0:00:00.491) 0:00:02.935 *********** 2025-05-06 01:10:08.042308 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:10:08.042322 | orchestrator | 2025-05-06 01:10:08.042336 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-06 01:10:08.042348 | orchestrator | Tuesday 06 May 2025 01:08:22 +0000 (0:00:00.550) 0:00:03.486 *********** 2025-05-06 01:10:08.042395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-06 01:10:08.042411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-06 01:10:08.042424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}}}})
2025-05-06 01:10:08.042444 | orchestrator |
orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
orchestrator | Tuesday 06 May 2025 01:08:24 +0000 (0:00:01.494) 0:00:04.980 ***********
orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
orchestrator | skipping: [testbed-node-1] => (item=grafana service definition, identical to the item above)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=grafana service definition, identical to the item above)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
orchestrator | Tuesday 06 May 2025 01:08:24 +0000 (0:00:00.473) 0:00:05.453 ***********
orchestrator | skipping: [testbed-node-0] => (item=grafana service definition, identical to the item above)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=grafana service definition, identical to the item above)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=grafana service definition, identical to the item above)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [grafana : Copying over config.json files] ********************************
orchestrator | Tuesday 06 May 2025 01:08:25 +0000 (0:00:00.681) 0:00:06.135 ***********
orchestrator | changed: [testbed-node-0] => (item=grafana service definition, identical to the item above)
orchestrator | changed: [testbed-node-1] => (item=grafana service definition, identical to the item above)
orchestrator | changed: [testbed-node-2] => (item=grafana service definition, identical to the item above)
orchestrator |
orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
orchestrator | Tuesday 06 May 2025 01:08:26 +0000 (0:00:01.322) 0:00:07.457 ***********
orchestrator | changed: [testbed-node-0] => (item=grafana service definition, identical to the item above)
orchestrator | changed: [testbed-node-1] => (item=grafana service definition, identical to the item above)
orchestrator | changed: [testbed-node-2] => (item=grafana service definition, identical to the item above)
orchestrator |
orchestrator | TASK [grafana : Copying over extra configuration file] *************************
orchestrator | Tuesday 06 May 2025 01:08:28 +0000 (0:00:01.647) 0:00:09.105 ***********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
orchestrator | Tuesday 06 May 2025 01:08:28 +0000 (0:00:00.269) 0:00:09.374 ***********
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
orchestrator |
orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
orchestrator | Tuesday 06 May 2025 01:08:30 +0000 (0:00:01.665) 0:00:11.040 ***********
orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
orchestrator |
orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
orchestrator | Tuesday 06 May 2025 01:08:31 +0000 (0:00:01.386) 0:00:12.427 ***********
orchestrator | ok: [testbed-node-0 -> localhost]
orchestrator |
orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
orchestrator | Tuesday 06 May 2025 01:08:31 +0000 (0:00:00.446) 0:00:12.874 ***********
orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access issue: '/etc/kolla/grafana/dashboards' is not a directory
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
orchestrator | Tuesday 06 May 2025 01:08:32 +0000 (0:00:00.827) 0:00:13.701 ***********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [grafana : Copying over custom dashboards]
********************************
orchestrator | Tuesday 06 May 2025 01:08:33 +0000 (0:00:00.388) 0:00:14.090 ***********
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1336910, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9389157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
orchestrator | changed: [testbed-node-1] => (item=ceph/rgw-s3-analytics.json, size 167897, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/rgw-s3-analytics.json, size 167897, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-detail.json, size 19695, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-detail.json, size 19695, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-detail.json, size 19695, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/osds-overview.json, size 38432, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/osds-overview.json, size 38432, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/osds-overview.json, size 38432, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-details.json, size 12997, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/rbd-details.json, size 12997, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/rbd-details.json, size 12997, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/host-details.json, size 44791, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/host-details.json, size 44791, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/host-details.json, size 44791, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/pool-detail.json, size 19609, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/pool-detail.json, size 19609, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/pool-detail.json, size 19609, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-sync-overview.json, size 16156, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-sync-overview.json, size 16156, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-sync-overview.json, size 16156, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/cephfs-overview.json, size 9025, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/cephfs-overview.json, size 9025, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/cephfs-overview.json, size 9025, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/README.md, size 84, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/README.md, size 84, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/README.md, size 84, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/hosts-overview.json, size 27218, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/hosts-overview.json, size 27218, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/hosts-overview.json, size 27218, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/ceph-cluster.json, size 34113, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/ceph-cluster.json, size 34113, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/ceph-cluster.json, size 34113, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/radosgw-overview.json, size 39370, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/radosgw-overview.json, size 39370, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/radosgw-overview.json, size 39370, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/multi-cluster-overview.json, size 62371, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/multi-cluster-overview.json, size 62371, stat fields as above)
orchestrator | changed: [testbed-node-2] => (item=ceph/multi-cluster-overview.json, size 62371, stat fields as above)
orchestrator | changed: [testbed-node-1] => (item=ceph/rbd-overview.json, size 25686, stat fields as above)
orchestrator | changed: [testbed-node-0] => (item=ceph/rbd-overview.json, size 25686, stat fields as above)
2025-05-06 01:10:08.044218 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1336907, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9369159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1336869, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9239156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1336869, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9239156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044263 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1336869, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9239156, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1336886, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9299157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1336886, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9299157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044307 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1336886, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9299157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1336859, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9169154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1336859, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9169154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-05-06 01:10:08.044352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1336859, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9169154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1336864, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9229157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1336864, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9229157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1336864, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9229157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1336879, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9289157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1336879, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9289157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1336879, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9289157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1336997, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9759164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1336997, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9759164, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1336997, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9759164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1336988, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9659162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1336988, 'dev': 162, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9659162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1336988, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9659162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1337059, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9819164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1337059, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9819164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1337059, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9819164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1336931, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.953916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1336931, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.953916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1336931, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.953916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1337068, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9839165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1337068, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9839165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1337068, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9839165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1337035, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9769163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044692 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1337035, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9769163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1337035, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9769163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1337040, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9779165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1337040, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9779165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1337040, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9779165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1336936, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.953916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1336936, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.953916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1336936, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.953916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1336993, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9669163, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1336993, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9669163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1336993, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9669163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1337080, 'dev': 162, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9849164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1337080, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9849164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1337080, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9849164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
100249, 'inode': 1337047, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9799163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1337047, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9799163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.044990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1337047, 'dev': 162, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1746490537.9799163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1336946, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9589162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1336946, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9589162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1336946, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9589162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1336941, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9559162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1336941, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9559162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1336941, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9559162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1336962, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9609163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1336962, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9609163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1336969, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9649162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045111 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1336962, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9609163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1336969, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9649162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1336969, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9649162, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045148 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1337089, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9859166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1337089, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9859166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-06 01:10:08.045168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1337089, 'dev': 162, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1746490537.9859166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-05-06 01:10:08.045179 | orchestrator |
2025-05-06 01:10:08.045189 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-05-06 01:10:08.045199 | orchestrator | Tuesday 06 May 2025 01:09:06 +0000 (0:00:33.320) 0:00:47.410 ***********
2025-05-06 01:10:08.045215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-06 01:10:08.045230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-06 01:10:08.045241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-06 01:10:08.045251 | orchestrator |
2025-05-06 01:10:08.045262 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-05-06 01:10:08.045272 | orchestrator | Tuesday 06 May 2025 01:09:07 +0000 (0:00:01.048) 0:00:48.459 ***********
2025-05-06 01:10:08.045282 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:10:08.045292 | orchestrator |
2025-05-06 01:10:08.045302 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-05-06 01:10:08.045312 | orchestrator | Tuesday 06 May 2025 01:09:10 +0000 (0:00:02.547) 0:00:51.006 ***********
2025-05-06 01:10:08.045322 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:10:08.045332 | orchestrator |
2025-05-06 01:10:08.045348 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-06 01:10:08.045359 | orchestrator | Tuesday 06 May 2025 01:09:12 +0000 (0:00:02.480) 0:00:53.487 ***********
2025-05-06 01:10:08.045369 | orchestrator |
2025-05-06 01:10:08.045379 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-06 01:10:08.045389 | orchestrator | Tuesday 06 May 2025 01:09:12 +0000 (0:00:00.075) 0:00:53.563 ***********
2025-05-06 01:10:08.045399 | orchestrator |
2025-05-06 01:10:08.045409 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-05-06 01:10:08.045419 | orchestrator | Tuesday 06 May 2025 01:09:12 +0000 (0:00:00.063) 0:00:53.626 ***********
2025-05-06 01:10:08.045429 | orchestrator |
2025-05-06 01:10:08.045439 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-05-06 01:10:08.045449 | orchestrator | Tuesday 06 May 2025 01:09:12 +0000 (0:00:00.235) 0:00:53.861 ***********
2025-05-06 01:10:08.045459 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:10:08.045469 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:10:08.045479 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:10:08.045489 | orchestrator |
2025-05-06 01:10:08.045499 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-05-06 01:10:08.045509 | orchestrator | Tuesday 06 May 2025 01:09:14 +0000 (0:00:01.893) 0:00:55.755 ***********
2025-05-06 01:10:08.045523 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:10:08.045534 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:10:08.045544 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-05-06 01:10:08.045554 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-05-06 01:10:08.045572 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:10:08.045582 | orchestrator |
2025-05-06 01:10:08.045593 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-05-06 01:10:08.045603 | orchestrator | Tuesday 06 May 2025 01:09:41 +0000 (0:00:26.868) 0:01:22.624 ***********
2025-05-06 01:10:08.045612 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:10:08.045622 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:10:08.045633 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:10:08.045643 | orchestrator |
2025-05-06 01:10:08.045653 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-05-06 01:10:08.045663 | orchestrator | Tuesday 06 May 2025 01:10:00 +0000 (0:00:18.900) 0:01:41.524 ***********
2025-05-06 01:10:08.045673 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:10:08.045683 | orchestrator |
2025-05-06 01:10:08.045693 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-05-06 01:10:08.045706 | orchestrator | Tuesday 06 May 2025 01:10:02 +0000 (0:00:02.370) 0:01:43.894 ***********
2025-05-06 01:10:11.084325 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:10:11.084456 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:10:11.084477 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:10:11.084493 | orchestrator |
2025-05-06 01:10:11.084509 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-05-06 01:10:11.084525 | orchestrator | Tuesday 06 May 2025 01:10:03 +0000 (0:00:00.389) 0:01:44.284 ***********
2025-05-06 01:10:11.084541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-05-06 01:10:11.084561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-05-06 01:10:11.084578 | orchestrator |
2025-05-06 01:10:11.084592 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-05-06 01:10:11.084606 | orchestrator | Tuesday 06 May 2025 01:10:05 +0000 (0:00:02.616) 0:01:46.900 ***********
2025-05-06 01:10:11.084620 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:10:11.084634 | orchestrator |
2025-05-06 01:10:11.084648 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 01:10:11.084663 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-06 01:10:11.084678 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-06 01:10:11.084693 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-06 01:10:11.084706 | orchestrator |
2025-05-06 01:10:11.084774 | orchestrator |
2025-05-06 01:10:11.084790 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 01:10:11.084804 | orchestrator | Tuesday 06 May 2025 01:10:06 +0000 (0:00:00.364) 0:01:47.265 ***********
2025-05-06 01:10:11.084818 | orchestrator | ===============================================================================
2025-05-06 01:10:11.084831 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 33.32s
2025-05-06 01:10:11.084847 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.87s
2025-05-06 01:10:11.084864 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 18.90s
2025-05-06 01:10:11.084880 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.62s
2025-05-06 01:10:11.084928 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.55s
2025-05-06 01:10:11.084945 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.48s
2025-05-06 01:10:11.084961 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.37s
2025-05-06 01:10:11.084994 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.89s
2025-05-06 01:10:11.085011 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.67s
2025-05-06 01:10:11.085027 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.65s
2025-05-06 01:10:11.085043 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.49s
2025-05-06 01:10:11.085059 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.39s
2025-05-06 01:10:11.085075 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.32s
2025-05-06 01:10:11.085090 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.05s
2025-05-06 01:10:11.085107 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.86s
2025-05-06 01:10:11.085122 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.83s
2025-05-06 01:10:11.085139 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.69s
2025-05-06 01:10:11.085155 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.68s
2025-05-06 01:10:11.085171 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.55s
2025-05-06 01:10:11.085187 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.49s
2025-05-06 01:10:11.085203 | orchestrator | 2025-05-06 01:10:08 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:11.085235 | orchestrator | 2025-05-06 01:10:11 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:11.086862 | orchestrator | 2025-05-06 01:10:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:14.138572 | orchestrator | 2025-05-06 01:10:11 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:14.138811 | orchestrator | 2025-05-06 01:10:14 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:17.194941 | orchestrator | 2025-05-06 01:10:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:17.195076 | orchestrator | 2025-05-06 01:10:14 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:17.195116 | orchestrator | 2025-05-06 01:10:17 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:17.195973 | orchestrator | 2025-05-06 01:10:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:17.196280 | orchestrator | 2025-05-06 01:10:17 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:20.247431 | orchestrator | 2025-05-06 01:10:20 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:20.248264 | orchestrator | 2025-05-06 01:10:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:23.294945 | orchestrator | 2025-05-06 01:10:20 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:23.295125 | orchestrator | 2025-05-06 01:10:23 |
INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:23.295989 | orchestrator | 2025-05-06 01:10:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:26.340369 | orchestrator | 2025-05-06 01:10:23 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:26.340528 | orchestrator | 2025-05-06 01:10:26 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:29.385539 | orchestrator | 2025-05-06 01:10:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:29.385673 | orchestrator | 2025-05-06 01:10:26 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:29.385738 | orchestrator | 2025-05-06 01:10:29 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:29.387468 | orchestrator | 2025-05-06 01:10:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:32.437245 | orchestrator | 2025-05-06 01:10:29 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:32.437389 | orchestrator | 2025-05-06 01:10:32 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:32.438615 | orchestrator | 2025-05-06 01:10:32 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:35.498737 | orchestrator | 2025-05-06 01:10:32 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:35.499163 | orchestrator | 2025-05-06 01:10:35 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:35.499257 | orchestrator | 2025-05-06 01:10:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:35.499281 | orchestrator | 2025-05-06 01:10:35 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:38.552502 | orchestrator | 2025-05-06 01:10:38 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:38.552887 | orchestrator | 2025-05-06 01:10:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:38.553080 | orchestrator | 2025-05-06 01:10:38 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:41.610406 | orchestrator | 2025-05-06 01:10:41 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:44.652223 | orchestrator | 2025-05-06 01:10:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:44.652375 | orchestrator | 2025-05-06 01:10:41 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:44.652415 | orchestrator | 2025-05-06 01:10:44 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:44.654989 | orchestrator | 2025-05-06 01:10:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:47.703454 | orchestrator | 2025-05-06 01:10:44 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:47.703618 | orchestrator | 2025-05-06 01:10:47 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:47.704529 | orchestrator | 2025-05-06 01:10:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:47.705112 | orchestrator | 2025-05-06 01:10:47 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:50.768918 | orchestrator | 2025-05-06 01:10:50 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:50.770148 | orchestrator | 2025-05-06 01:10:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:50.770394 | orchestrator | 2025-05-06 01:10:50 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:53.818272 | orchestrator | 2025-05-06 01:10:53 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:53.818857 | orchestrator | 2025-05-06 01:10:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:56.861160 | orchestrator | 2025-05-06 01:10:53 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:56.861344 | orchestrator | 2025-05-06 01:10:56 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:56.862387 | orchestrator | 2025-05-06 01:10:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:10:59.904888 | orchestrator | 2025-05-06 01:10:56 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:10:59.905042 | orchestrator | 2025-05-06 01:10:59 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:10:59.905959 | orchestrator | 2025-05-06 01:10:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:11:02.957810 | orchestrator | 2025-05-06 01:10:59 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:11:02.957961 | orchestrator | 2025-05-06 01:11:02 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:11:02.961177 | orchestrator | 2025-05-06 01:11:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:11:06.003787 | orchestrator | 2025-05-06 01:11:02 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:11:06.003958 | orchestrator | 2025-05-06 01:11:06 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:11:09.073190 | orchestrator | 2025-05-06 01:11:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:11:09.073317 | orchestrator | 2025-05-06 01:11:06 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:11:09.073357 | orchestrator | 2025-05-06 01:11:09 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:11:09.074121 | orchestrator | 2025-05-06 01:11:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:11:09.074289 | orchestrator | 2025-05-06 01:11:09 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:11:12.112743 | orchestrator | 2025-05-06 01:11:12 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:11:12.112876 | orchestrator | 2025-05-06 01:11:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:11:15.163157 | orchestrator | 2025-05-06 01:11:12 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:11:15.163300 | orchestrator | 2025-05-06 01:11:15 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:11:18.214507 | orchestrator | 2025-05-06 01:11:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:11:18.214695 | orchestrator | 2025-05-06 01:11:15 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:11:18.214737 | orchestrator | 2025-05-06 01:11:18 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:11:21.256408 | orchestrator | 2025-05-06 01:11:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:11:21.256531 | orchestrator | 2025-05-06 01:11:18 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:11:21.256627 | orchestrator | 2025-05-06 01:11:21 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:11:24.296984 | orchestrator | 2025-05-06 01:11:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:11:24.297105 | orchestrator | 2025-05-06 01:11:21 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:11:24.297142 | orchestrator | 2025-05-06 01:11:24 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:11:24.297477 | orchestrator | 2025-05-06 01:11:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:11:27.341088 | orchestrator | 2025-05-06 01:11:24 | INFO  | Wait 1 second(s)
until the next check 2025-05-06 01:11:27.341240 | orchestrator | 2025-05-06 01:11:27 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:11:27.342111 | orchestrator | 2025-05-06 01:11:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:11:27.342370 | orchestrator | 2025-05-06 01:11:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:11:30.391659 | orchestrator | 2025-05-06 01:11:30 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:11:30.392741 | orchestrator | 2025-05-06 01:11:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:11:30.392881 | orchestrator | 2025-05-06 01:11:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:11:33.433794 | orchestrator | 2025-05-06 01:11:33 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:11:33.436293 | orchestrator | 2025-05-06 01:11:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:11:33.436779 | orchestrator | 2025-05-06 01:11:33 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:11:36.491382 | orchestrator | 2025-05-06 01:11:36 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:11:36.492250 | orchestrator | 2025-05-06 01:11:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:11:39.542085 | orchestrator | 2025-05-06 01:11:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:11:39.542239 | orchestrator | 2025-05-06 01:11:39 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:11:42.592585 | orchestrator | 2025-05-06 01:11:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:11:42.592717 | orchestrator | 2025-05-06 01:11:39 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:11:42.592756 | orchestrator | 2025-05-06 
01:11:42 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:11:42.594265 | orchestrator | 2025-05-06 01:11:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:11:45.642503 | orchestrator | 2025-05-06 01:11:42 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:11:45.642739 | orchestrator | 2025-05-06 01:11:45 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:11:45.644041 | orchestrator | 2025-05-06 01:11:45 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:11:48.689990 | orchestrator | 2025-05-06 01:11:45 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:11:48.690197 | orchestrator | 2025-05-06 01:11:48 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:11:48.691334 | orchestrator | 2025-05-06 01:11:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:11:51.745736 | orchestrator | 2025-05-06 01:11:48 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:11:51.745916 | orchestrator | 2025-05-06 01:11:51 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:11:51.746967 | orchestrator | 2025-05-06 01:11:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:11:54.788892 | orchestrator | 2025-05-06 01:11:51 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:11:54.789069 | orchestrator | 2025-05-06 01:11:54 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:11:54.790802 | orchestrator | 2025-05-06 01:11:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:11:54.791113 | orchestrator | 2025-05-06 01:11:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:11:57.833809 | orchestrator | 2025-05-06 01:11:57 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state 
STARTED 2025-05-06 01:11:57.836762 | orchestrator | 2025-05-06 01:11:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:00.881550 | orchestrator | 2025-05-06 01:11:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:00.881732 | orchestrator | 2025-05-06 01:12:00 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:00.882667 | orchestrator | 2025-05-06 01:12:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:03.929103 | orchestrator | 2025-05-06 01:12:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:03.929250 | orchestrator | 2025-05-06 01:12:03 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:03.930391 | orchestrator | 2025-05-06 01:12:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:06.973545 | orchestrator | 2025-05-06 01:12:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:06.973691 | orchestrator | 2025-05-06 01:12:06 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:06.975425 | orchestrator | 2025-05-06 01:12:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:10.031357 | orchestrator | 2025-05-06 01:12:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:10.031572 | orchestrator | 2025-05-06 01:12:10 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:10.032068 | orchestrator | 2025-05-06 01:12:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:13.092845 | orchestrator | 2025-05-06 01:12:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:13.093031 | orchestrator | 2025-05-06 01:12:13 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:13.095820 | orchestrator | 2025-05-06 01:12:13 | INFO  
| Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:16.143595 | orchestrator | 2025-05-06 01:12:13 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:16.143740 | orchestrator | 2025-05-06 01:12:16 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:16.145395 | orchestrator | 2025-05-06 01:12:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:19.192450 | orchestrator | 2025-05-06 01:12:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:19.192675 | orchestrator | 2025-05-06 01:12:19 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:19.193938 | orchestrator | 2025-05-06 01:12:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:22.249805 | orchestrator | 2025-05-06 01:12:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:22.249980 | orchestrator | 2025-05-06 01:12:22 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:22.251434 | orchestrator | 2025-05-06 01:12:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:25.294630 | orchestrator | 2025-05-06 01:12:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:25.294807 | orchestrator | 2025-05-06 01:12:25 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:25.295830 | orchestrator | 2025-05-06 01:12:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:25.295943 | orchestrator | 2025-05-06 01:12:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:28.354399 | orchestrator | 2025-05-06 01:12:28 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:28.355529 | orchestrator | 2025-05-06 01:12:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 
01:12:31.404913 | orchestrator | 2025-05-06 01:12:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:31.405095 | orchestrator | 2025-05-06 01:12:31 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:31.406158 | orchestrator | 2025-05-06 01:12:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:34.468367 | orchestrator | 2025-05-06 01:12:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:34.468551 | orchestrator | 2025-05-06 01:12:34 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:34.468996 | orchestrator | 2025-05-06 01:12:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:34.469410 | orchestrator | 2025-05-06 01:12:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:37.515010 | orchestrator | 2025-05-06 01:12:37 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:40.563253 | orchestrator | 2025-05-06 01:12:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:40.563385 | orchestrator | 2025-05-06 01:12:37 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:40.563456 | orchestrator | 2025-05-06 01:12:40 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:40.565041 | orchestrator | 2025-05-06 01:12:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:40.565319 | orchestrator | 2025-05-06 01:12:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:43.612603 | orchestrator | 2025-05-06 01:12:43 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:43.612976 | orchestrator | 2025-05-06 01:12:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:46.674406 | orchestrator | 2025-05-06 01:12:43 | INFO  | Wait 1 second(s) 
until the next check 2025-05-06 01:12:46.674602 | orchestrator | 2025-05-06 01:12:46 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:46.677475 | orchestrator | 2025-05-06 01:12:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:49.722358 | orchestrator | 2025-05-06 01:12:46 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:49.722602 | orchestrator | 2025-05-06 01:12:49 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:49.724052 | orchestrator | 2025-05-06 01:12:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:52.770279 | orchestrator | 2025-05-06 01:12:49 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:52.770454 | orchestrator | 2025-05-06 01:12:52 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:52.771750 | orchestrator | 2025-05-06 01:12:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:52.772118 | orchestrator | 2025-05-06 01:12:52 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:55.820121 | orchestrator | 2025-05-06 01:12:55 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:55.820791 | orchestrator | 2025-05-06 01:12:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:12:58.877756 | orchestrator | 2025-05-06 01:12:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:12:58.877940 | orchestrator | 2025-05-06 01:12:58 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:12:58.878833 | orchestrator | 2025-05-06 01:12:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:01.938564 | orchestrator | 2025-05-06 01:12:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:01.938740 | orchestrator | 2025-05-06 
01:13:01 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:01.939052 | orchestrator | 2025-05-06 01:13:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:01.939087 | orchestrator | 2025-05-06 01:13:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:04.994540 | orchestrator | 2025-05-06 01:13:04 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:04.995706 | orchestrator | 2025-05-06 01:13:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:08.032815 | orchestrator | 2025-05-06 01:13:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:08.032930 | orchestrator | 2025-05-06 01:13:08 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:08.034434 | orchestrator | 2025-05-06 01:13:08 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:08.034707 | orchestrator | 2025-05-06 01:13:08 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:11.080530 | orchestrator | 2025-05-06 01:13:11 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:11.082351 | orchestrator | 2025-05-06 01:13:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:14.138748 | orchestrator | 2025-05-06 01:13:11 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:14.138892 | orchestrator | 2025-05-06 01:13:14 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:14.139867 | orchestrator | 2025-05-06 01:13:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:17.185987 | orchestrator | 2025-05-06 01:13:14 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:17.186200 | orchestrator | 2025-05-06 01:13:17 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state 
STARTED 2025-05-06 01:13:17.186970 | orchestrator | 2025-05-06 01:13:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:20.234420 | orchestrator | 2025-05-06 01:13:17 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:20.234576 | orchestrator | 2025-05-06 01:13:20 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:20.238226 | orchestrator | 2025-05-06 01:13:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:23.277882 | orchestrator | 2025-05-06 01:13:20 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:23.278085 | orchestrator | 2025-05-06 01:13:23 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:23.278586 | orchestrator | 2025-05-06 01:13:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:26.331898 | orchestrator | 2025-05-06 01:13:23 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:26.332039 | orchestrator | 2025-05-06 01:13:26 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:26.333818 | orchestrator | 2025-05-06 01:13:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:29.384104 | orchestrator | 2025-05-06 01:13:26 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:29.384255 | orchestrator | 2025-05-06 01:13:29 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:29.384453 | orchestrator | 2025-05-06 01:13:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:32.433415 | orchestrator | 2025-05-06 01:13:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:32.433565 | orchestrator | 2025-05-06 01:13:32 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:32.434722 | orchestrator | 2025-05-06 01:13:32 | INFO  
| Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:35.487242 | orchestrator | 2025-05-06 01:13:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:35.487498 | orchestrator | 2025-05-06 01:13:35 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:35.489893 | orchestrator | 2025-05-06 01:13:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:38.537830 | orchestrator | 2025-05-06 01:13:35 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:38.538001 | orchestrator | 2025-05-06 01:13:38 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:38.539760 | orchestrator | 2025-05-06 01:13:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:41.589861 | orchestrator | 2025-05-06 01:13:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:41.590099 | orchestrator | 2025-05-06 01:13:41 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:41.591141 | orchestrator | 2025-05-06 01:13:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:41.591426 | orchestrator | 2025-05-06 01:13:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:44.640625 | orchestrator | 2025-05-06 01:13:44 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:44.640975 | orchestrator | 2025-05-06 01:13:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:47.675377 | orchestrator | 2025-05-06 01:13:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:47.675607 | orchestrator | 2025-05-06 01:13:47 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:47.676690 | orchestrator | 2025-05-06 01:13:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 
01:13:47.676730 | orchestrator | 2025-05-06 01:13:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:50.710659 | orchestrator | 2025-05-06 01:13:50 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:50.711183 | orchestrator | 2025-05-06 01:13:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:53.741710 | orchestrator | 2025-05-06 01:13:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:53.741968 | orchestrator | 2025-05-06 01:13:53 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:53.743945 | orchestrator | 2025-05-06 01:13:53 | INFO  | Task 8d9f06ab-d988-4de0-974b-014aac58dc9d is in state STARTED 2025-05-06 01:13:53.744004 | orchestrator | 2025-05-06 01:13:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:56.787139 | orchestrator | 2025-05-06 01:13:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:56.787260 | orchestrator | 2025-05-06 01:13:56 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:56.788059 | orchestrator | 2025-05-06 01:13:56 | INFO  | Task 8d9f06ab-d988-4de0-974b-014aac58dc9d is in state STARTED 2025-05-06 01:13:56.789206 | orchestrator | 2025-05-06 01:13:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:13:56.789417 | orchestrator | 2025-05-06 01:13:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:13:59.833698 | orchestrator | 2025-05-06 01:13:59 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:13:59.835059 | orchestrator | 2025-05-06 01:13:59 | INFO  | Task 8d9f06ab-d988-4de0-974b-014aac58dc9d is in state STARTED 2025-05-06 01:13:59.836772 | orchestrator | 2025-05-06 01:13:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:14:02.893244 | orchestrator | 2025-05-06 01:13:59 | 
INFO  | Wait 1 second(s) until the next check 2025-05-06 01:14:02.893484 | orchestrator | 2025-05-06 01:14:02 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:14:02.893713 | orchestrator | 2025-05-06 01:14:02 | INFO  | Task 8d9f06ab-d988-4de0-974b-014aac58dc9d is in state SUCCESS 2025-05-06 01:14:02.895214 | orchestrator | 2025-05-06 01:14:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:14:05.943378 | orchestrator | 2025-05-06 01:14:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:14:05.943540 | orchestrator | 2025-05-06 01:14:05 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:14:05.944423 | orchestrator | 2025-05-06 01:14:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:14:05.944669 | orchestrator | 2025-05-06 01:14:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:14:08.999213 | orchestrator | 2025-05-06 01:14:08 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:14:09.001719 | orchestrator | 2025-05-06 01:14:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:14:12.055976 | orchestrator | 2025-05-06 01:14:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:14:12.056128 | orchestrator | 2025-05-06 01:14:12 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:14:12.057329 | orchestrator | 2025-05-06 01:14:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:14:15.098924 | orchestrator | 2025-05-06 01:14:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:14:15.099124 | orchestrator | 2025-05-06 01:14:15 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED 2025-05-06 01:14:15.099857 | orchestrator | 2025-05-06 01:14:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 
2025-05-06 01:14:18.150513 | orchestrator | 2025-05-06 01:14:15 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:18.150800 | orchestrator | 2025-05-06 01:14:18 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:14:18.151219 | orchestrator | 2025-05-06 01:14:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:18.151259 | orchestrator | 2025-05-06 01:14:18 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:21.215438 | orchestrator | 2025-05-06 01:14:21 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:14:21.216046 | orchestrator | 2025-05-06 01:14:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:24.278750 | orchestrator | 2025-05-06 01:14:21 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:24.278894 | orchestrator | 2025-05-06 01:14:24 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state STARTED
2025-05-06 01:14:24.280578 | orchestrator | 2025-05-06 01:14:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:24.281179 | orchestrator | 2025-05-06 01:14:24 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:27.346509 | orchestrator | 2025-05-06 01:14:27 | INFO  | Task c305dfe9-234d-443d-8a83-5e3a9637aba4 is in state SUCCESS
2025-05-06 01:14:27.348786 | orchestrator |
2025-05-06 01:14:27.348954 | orchestrator | None
2025-05-06 01:14:27.349132 | orchestrator |
2025-05-06 01:14:27.349146 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-06 01:14:27.349639 | orchestrator |
2025-05-06 01:14:27.349678 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-05-06 01:14:27.349696 | orchestrator | Tuesday 06 May 2025 01:06:01 +0000 (0:00:00.436) 0:00:00.436 ***********
2025-05-06 01:14:27.349714 | orchestrator | changed: [testbed-manager]
2025-05-06 01:14:27.349732 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.349749 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:14:27.349767 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:14:27.349784 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:14:27.349801 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:14:27.349925 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:14:27.349941 | orchestrator |
2025-05-06 01:14:27.349952 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-06 01:14:27.349962 | orchestrator | Tuesday 06 May 2025 01:06:03 +0000 (0:00:01.635) 0:00:02.071 ***********
2025-05-06 01:14:27.349972 | orchestrator | changed: [testbed-manager]
2025-05-06 01:14:27.349982 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.349992 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:14:27.350003 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:14:27.350013 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:14:27.350090 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:14:27.350101 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:14:27.350111 | orchestrator |
2025-05-06 01:14:27.350121 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-06 01:14:27.350132 | orchestrator | Tuesday 06 May 2025 01:06:05 +0000 (0:00:01.406) 0:00:03.477 ***********
2025-05-06 01:14:27.350142 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-05-06 01:14:27.350153 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-05-06 01:14:27.350163 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-05-06 01:14:27.350174 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-05-06 01:14:27.350184 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-05-06 01:14:27.350194 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-05-06 01:14:27.350204 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-05-06 01:14:27.350214 | orchestrator |
2025-05-06 01:14:27.350224 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-05-06 01:14:27.350234 | orchestrator |
2025-05-06 01:14:27.350245 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-06 01:14:27.350320 | orchestrator | Tuesday 06 May 2025 01:06:06 +0000 (0:00:00.981) 0:00:04.459 ***********
2025-05-06 01:14:27.350338 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 01:14:27.350349 | orchestrator |
2025-05-06 01:14:27.350359 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-05-06 01:14:27.350369 | orchestrator | Tuesday 06 May 2025 01:06:06 +0000 (0:00:00.861) 0:00:05.320 ***********
2025-05-06 01:14:27.350380 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-05-06 01:14:27.350391 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-05-06 01:14:27.350401 | orchestrator |
2025-05-06 01:14:27.350411 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-05-06 01:14:27.350421 | orchestrator | Tuesday 06 May 2025 01:06:11 +0000 (0:00:04.488) 0:00:09.808 ***********
2025-05-06 01:14:27.350431 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-06 01:14:27.350442 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-06 01:14:27.350452 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.350462 | orchestrator |
2025-05-06 01:14:27.350472 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-05-06 01:14:27.350482 | orchestrator | Tuesday 06 May 2025 01:06:16 +0000 (0:00:04.648) 0:00:14.457 ***********
2025-05-06 01:14:27.350492 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.350502 | orchestrator |
2025-05-06 01:14:27.350514 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-05-06 01:14:27.350525 | orchestrator | Tuesday 06 May 2025 01:06:16 +0000 (0:00:00.709) 0:00:15.166 ***********
2025-05-06 01:14:27.350537 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.350548 | orchestrator |
2025-05-06 01:14:27.350560 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-05-06 01:14:27.350577 | orchestrator | Tuesday 06 May 2025 01:06:18 +0000 (0:00:01.711) 0:00:16.878 ***********
2025-05-06 01:14:27.350589 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.350600 | orchestrator |
2025-05-06 01:14:27.350611 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-06 01:14:27.350623 | orchestrator | Tuesday 06 May 2025 01:06:21 +0000 (0:00:02.766) 0:00:19.644 ***********
2025-05-06 01:14:27.350635 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.350646 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.350658 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.350670 | orchestrator |
2025-05-06 01:14:27.350682 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-06 01:14:27.350694 | orchestrator | Tuesday 06 May 2025 01:06:22 +0000 (0:00:01.089) 0:00:20.734 ***********
2025-05-06 01:14:27.350705 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:14:27.350717 | orchestrator |
2025-05-06 01:14:27.350729 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-05-06 01:14:27.350740 | orchestrator | Tuesday 06 May 2025 01:06:52 +0000 (0:00:30.015) 0:00:50.750 ***********
2025-05-06 01:14:27.350752 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.350763 | orchestrator |
2025-05-06 01:14:27.350774 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-06 01:14:27.350786 | orchestrator | Tuesday 06 May 2025 01:07:06 +0000 (0:00:13.772) 0:01:04.522 ***********
2025-05-06 01:14:27.350798 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:14:27.350809 | orchestrator |
2025-05-06 01:14:27.350820 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-06 01:14:27.350831 | orchestrator | Tuesday 06 May 2025 01:07:16 +0000 (0:00:10.300) 0:01:14.823 ***********
2025-05-06 01:14:27.350852 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:14:27.350862 | orchestrator |
2025-05-06 01:14:27.350872 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-05-06 01:14:27.350882 | orchestrator | Tuesday 06 May 2025 01:07:17 +0000 (0:00:00.867) 0:01:15.690 ***********
2025-05-06 01:14:27.350892 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.350910 | orchestrator |
2025-05-06 01:14:27.350920 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-06 01:14:27.350930 | orchestrator | Tuesday 06 May 2025 01:07:17 +0000 (0:00:00.721) 0:01:16.411 ***********
2025-05-06 01:14:27.350941 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 01:14:27.350951 | orchestrator |
2025-05-06 01:14:27.350961 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-05-06 01:14:27.350971 | orchestrator | Tuesday 06 May 2025 01:07:18 +0000 (0:00:00.831) 0:01:17.243 ***********
2025-05-06 01:14:27.350981 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:14:27.350991 | orchestrator |
2025-05-06 01:14:27.351001 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-05-06 01:14:27.351011 | orchestrator | Tuesday 06 May 2025 01:07:35 +0000 (0:00:17.096) 0:01:34.340 ***********
2025-05-06 01:14:27.351021 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.351031 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.351041 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.351051 | orchestrator |
2025-05-06 01:14:27.351061 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-05-06 01:14:27.351071 | orchestrator |
2025-05-06 01:14:27.351081 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-06 01:14:27.351091 | orchestrator | Tuesday 06 May 2025 01:07:36 +0000 (0:00:00.318) 0:01:34.658 ***********
2025-05-06 01:14:27.351101 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-06 01:14:27.351111 | orchestrator |
2025-05-06 01:14:27.351121 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-05-06 01:14:27.351206 | orchestrator | Tuesday 06 May 2025 01:07:37 +0000 (0:00:00.855) 0:01:35.513 ***********
2025-05-06 01:14:27.351217 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.351227 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.351237 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.351247 | orchestrator |
2025-05-06 01:14:27.351301 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-05-06 01:14:27.351315 | orchestrator | Tuesday 06 May 2025 01:07:39 +0000 (0:00:02.395) 0:01:37.909 ***********
2025-05-06 01:14:27.351325 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.351335 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.351345 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.351355 | orchestrator |
2025-05-06 01:14:27.351365 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-06 01:14:27.351375 | orchestrator | Tuesday 06 May 2025 01:07:42 +0000 (0:00:02.553) 0:01:40.463 ***********
2025-05-06 01:14:27.351385 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.351395 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.351404 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.351414 | orchestrator |
2025-05-06 01:14:27.351424 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-06 01:14:27.351434 | orchestrator | Tuesday 06 May 2025 01:07:43 +0000 (0:00:01.315) 0:01:41.778 ***********
2025-05-06 01:14:27.351444 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-05-06 01:14:27.351454 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.351464 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-05-06 01:14:27.351474 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.351484 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-05-06 01:14:27.351494 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-05-06 01:14:27.351504 | orchestrator |
2025-05-06 01:14:27.351514 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-05-06 01:14:27.351529 | orchestrator | Tuesday 06 May 2025 01:07:52 +0000 (0:00:09.356) 0:01:51.135 ***********
2025-05-06 01:14:27.351540 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.351550 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.351569 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.351579 | orchestrator |
2025-05-06 01:14:27.351589 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-05-06 01:14:27.351599 | orchestrator | Tuesday 06 May 2025 01:07:52 +0000
(0:00:00.285) 0:01:51.420 *********** 2025-05-06 01:14:27.351609 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-06 01:14:27.351619 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.351629 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-06 01:14:27.351639 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.351648 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-06 01:14:27.351658 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.351668 | orchestrator | 2025-05-06 01:14:27.351678 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-06 01:14:27.351688 | orchestrator | Tuesday 06 May 2025 01:07:53 +0000 (0:00:00.818) 0:01:52.239 *********** 2025-05-06 01:14:27.351698 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.351708 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.351718 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:14:27.351728 | orchestrator | 2025-05-06 01:14:27.351738 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-06 01:14:27.351748 | orchestrator | Tuesday 06 May 2025 01:07:54 +0000 (0:00:00.461) 0:01:52.700 *********** 2025-05-06 01:14:27.351758 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.351768 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.351778 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:14:27.351788 | orchestrator | 2025-05-06 01:14:27.351798 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-06 01:14:27.351808 | orchestrator | Tuesday 06 May 2025 01:07:55 +0000 (0:00:01.021) 0:01:53.722 *********** 2025-05-06 01:14:27.351818 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.351837 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.351848 | orchestrator | changed: 
[testbed-node-0] 2025-05-06 01:14:27.351858 | orchestrator | 2025-05-06 01:14:27.351868 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-06 01:14:27.351878 | orchestrator | Tuesday 06 May 2025 01:07:57 +0000 (0:00:01.889) 0:01:55.611 *********** 2025-05-06 01:14:27.351888 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.351898 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.351908 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:14:27.351919 | orchestrator | 2025-05-06 01:14:27.351929 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-06 01:14:27.351939 | orchestrator | Tuesday 06 May 2025 01:08:17 +0000 (0:00:20.053) 0:02:15.665 *********** 2025-05-06 01:14:27.351948 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.351958 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.351968 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:14:27.351983 | orchestrator | 2025-05-06 01:14:27.351993 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-06 01:14:27.352004 | orchestrator | Tuesday 06 May 2025 01:08:28 +0000 (0:00:10.864) 0:02:26.530 *********** 2025-05-06 01:14:27.352013 | orchestrator | ok: [testbed-node-0] 2025-05-06 01:14:27.352023 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.352033 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.352043 | orchestrator | 2025-05-06 01:14:27.352053 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-06 01:14:27.352063 | orchestrator | Tuesday 06 May 2025 01:08:29 +0000 (0:00:01.244) 0:02:27.774 *********** 2025-05-06 01:14:27.352073 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.352083 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.352093 | orchestrator | changed: [testbed-node-0] 
2025-05-06 01:14:27.352103 | orchestrator | 2025-05-06 01:14:27.352118 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-06 01:14:27.352133 | orchestrator | Tuesday 06 May 2025 01:08:40 +0000 (0:00:11.137) 0:02:38.912 *********** 2025-05-06 01:14:27.352144 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.352154 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.352164 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.352174 | orchestrator | 2025-05-06 01:14:27.352183 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-06 01:14:27.352193 | orchestrator | Tuesday 06 May 2025 01:08:41 +0000 (0:00:01.499) 0:02:40.411 *********** 2025-05-06 01:14:27.352203 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.352213 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.352223 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.352233 | orchestrator | 2025-05-06 01:14:27.352243 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-06 01:14:27.352253 | orchestrator | 2025-05-06 01:14:27.352285 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-06 01:14:27.352295 | orchestrator | Tuesday 06 May 2025 01:08:42 +0000 (0:00:00.523) 0:02:40.934 *********** 2025-05-06 01:14:27.352305 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:14:27.352317 | orchestrator | 2025-05-06 01:14:27.352327 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-06 01:14:27.352337 | orchestrator | Tuesday 06 May 2025 01:08:43 +0000 (0:00:00.863) 0:02:41.798 *********** 2025-05-06 01:14:27.352347 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  
2025-05-06 01:14:27.352356 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-06 01:14:27.352366 | orchestrator | 2025-05-06 01:14:27.352376 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-06 01:14:27.352386 | orchestrator | Tuesday 06 May 2025 01:08:46 +0000 (0:00:03.444) 0:02:45.242 *********** 2025-05-06 01:14:27.352396 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-06 01:14:27.352408 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-06 01:14:27.352418 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-06 01:14:27.352429 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-06 01:14:27.352439 | orchestrator | 2025-05-06 01:14:27.352449 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-06 01:14:27.352459 | orchestrator | Tuesday 06 May 2025 01:08:53 +0000 (0:00:07.051) 0:02:52.294 *********** 2025-05-06 01:14:27.352472 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-06 01:14:27.352488 | orchestrator | 2025-05-06 01:14:27.352505 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-06 01:14:27.352522 | orchestrator | Tuesday 06 May 2025 01:08:57 +0000 (0:00:03.403) 0:02:55.697 *********** 2025-05-06 01:14:27.352538 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-06 01:14:27.352555 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-06 01:14:27.352578 | orchestrator | 2025-05-06 01:14:27.352594 | orchestrator | TASK [service-ks-register : nova | Creating roles] 
***************************** 2025-05-06 01:14:27.352609 | orchestrator | Tuesday 06 May 2025 01:09:01 +0000 (0:00:04.181) 0:02:59.879 *********** 2025-05-06 01:14:27.352626 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-06 01:14:27.352642 | orchestrator | 2025-05-06 01:14:27.352658 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-06 01:14:27.352675 | orchestrator | Tuesday 06 May 2025 01:09:04 +0000 (0:00:03.247) 0:03:03.126 *********** 2025-05-06 01:14:27.352691 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-06 01:14:27.352709 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-06 01:14:27.352726 | orchestrator | 2025-05-06 01:14:27.352737 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-06 01:14:27.352754 | orchestrator | Tuesday 06 May 2025 01:09:12 +0000 (0:00:08.292) 0:03:11.419 *********** 2025-05-06 01:14:27.352768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.352783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.352795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.352807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.352831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.352842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.352853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.352865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.352876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.352886 | orchestrator | 2025-05-06 01:14:27.352896 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-06 01:14:27.352907 | orchestrator | Tuesday 06 May 2025 01:09:14 +0000 (0:00:01.703) 0:03:13.122 *********** 2025-05-06 01:14:27.352917 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.352933 | orchestrator | 2025-05-06 01:14:27.352943 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-06 01:14:27.352954 | orchestrator | Tuesday 06 May 2025 01:09:14 +0000 (0:00:00.123) 0:03:13.246 *********** 2025-05-06 01:14:27.352964 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.352974 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.352984 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.352994 | orchestrator | 2025-05-06 01:14:27.353004 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-06 01:14:27.353014 | orchestrator | Tuesday 06 May 2025 01:09:15 +0000 (0:00:00.438) 0:03:13.684 *********** 2025-05-06 01:14:27.353024 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-06 01:14:27.353034 | orchestrator | 2025-05-06 01:14:27.353048 | 
orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-06 01:14:27.353059 | orchestrator | Tuesday 06 May 2025 01:09:15 +0000 (0:00:00.347) 0:03:14.031 *********** 2025-05-06 01:14:27.353069 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.353079 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.353089 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.353099 | orchestrator | 2025-05-06 01:14:27.353109 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-06 01:14:27.353119 | orchestrator | Tuesday 06 May 2025 01:09:15 +0000 (0:00:00.277) 0:03:14.309 *********** 2025-05-06 01:14:27.353129 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:14:27.353139 | orchestrator | 2025-05-06 01:14:27.353149 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-06 01:14:27.353159 | orchestrator | Tuesday 06 May 2025 01:09:16 +0000 (0:00:00.731) 0:03:15.041 *********** 2025-05-06 01:14:27.353170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.353181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.353207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.353219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.353230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.353241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.353251 | orchestrator | 2025-05-06 01:14:27.353379 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-06 01:14:27.353404 | orchestrator | Tuesday 06 May 2025 01:09:19 +0000 (0:00:02.470) 0:03:17.511 *********** 2025-05-06 01:14:27.353414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 01:14:27.353438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353455 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.353464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 01:14:27.353474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353483 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.353492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 01:14:27.353507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353516 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.353525 | orchestrator | 2025-05-06 01:14:27.353533 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-06 01:14:27.353542 | orchestrator | Tuesday 06 May 2025 01:09:19 +0000 (0:00:00.712) 0:03:18.224 *********** 2025-05-06 01:14:27.353586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 01:14:27.353597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353606 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.353615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 01:14:27.353630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353639 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.353655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 01:14:27.353673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353682 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.353690 | orchestrator | 2025-05-06 01:14:27.353699 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-06 01:14:27.353708 | orchestrator | Tuesday 06 May 2025 01:09:20 +0000 (0:00:01.061) 0:03:19.285 *********** 2025-05-06 01:14:27.353717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.353731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.353753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.353763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.353777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.353796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.353818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353827 | orchestrator | 2025-05-06 01:14:27.353836 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-06 01:14:27.353844 | orchestrator | Tuesday 06 May 2025 01:09:23 +0000 (0:00:02.731) 0:03:22.016 *********** 2025-05-06 01:14:27.353860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.353874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.353889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.353905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.353915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.353928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.353959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.353968 | orchestrator | 2025-05-06 01:14:27.353977 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-06 01:14:27.353986 | orchestrator | Tuesday 06 May 2025 01:09:29 +0000 (0:00:06.052) 0:03:28.069 *********** 2025-05-06 01:14:27.354001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 01:14:27.354052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.354065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.354074 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.354084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-06 01:14:27.354109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.354118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.354135 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.354144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-06 01:14:27.354153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.354162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.354171 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.354179 | orchestrator |
2025-05-06 01:14:27.354188 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-05-06 01:14:27.354197 | orchestrator | Tuesday 06 May 2025 01:09:30 +0000 (0:00:00.731) 0:03:28.801 ***********
2025-05-06 01:14:27.354205 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.354214 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:14:27.354233 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:14:27.354242 | orchestrator |
2025-05-06 01:14:27.354284 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-05-06 01:14:27.354298 | orchestrator | Tuesday 06 May 2025 01:09:32 +0000 (0:00:01.675) 0:03:30.477 ***********
2025-05-06 01:14:27.354312 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.354909 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.354929 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.354938 | orchestrator |
2025-05-06 01:14:27.354947 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-05-06 01:14:27.354956 | orchestrator | Tuesday 06 May 2025 01:09:32 +0000 (0:00:00.425) 0:03:30.902 ***********
2025-05-06 01:14:27.354978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-06 01:14:27.354997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-06 01:14:27.355007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'},
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-06 01:14:27.355038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.355053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-05-06 01:14:27.355072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.355081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.355091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.355099 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.355109 | orchestrator | 2025-05-06 01:14:27.355117 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-06 01:14:27.355154 | orchestrator | Tuesday 06 May 2025 01:09:34 +0000 (0:00:02.004) 0:03:32.907 *********** 2025-05-06 01:14:27.355163 | orchestrator | 2025-05-06 01:14:27.355172 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-06 01:14:27.355181 | orchestrator | Tuesday 06 May 2025 01:09:34 +0000 (0:00:00.255) 0:03:33.162 *********** 2025-05-06 01:14:27.355189 | orchestrator | 2025-05-06 01:14:27.355198 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-06 01:14:27.355206 | orchestrator | Tuesday 06 May 2025 01:09:34 +0000 (0:00:00.105) 0:03:33.268 *********** 2025-05-06 01:14:27.355220 | orchestrator | 2025-05-06 01:14:27.355354 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-06 01:14:27.355364 | orchestrator | Tuesday 06 May 2025 01:09:35 +0000 (0:00:00.243) 0:03:33.512 *********** 2025-05-06 01:14:27.355372 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:14:27.355415 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:14:27.355426 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:14:27.355435 | orchestrator | 2025-05-06 
01:14:27.355443 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-06 01:14:27.355518 | orchestrator | Tuesday 06 May 2025 01:09:55 +0000 (0:00:20.426) 0:03:53.938 *********** 2025-05-06 01:14:27.355527 | orchestrator | changed: [testbed-node-0] 2025-05-06 01:14:27.355536 | orchestrator | changed: [testbed-node-1] 2025-05-06 01:14:27.355545 | orchestrator | changed: [testbed-node-2] 2025-05-06 01:14:27.355554 | orchestrator | 2025-05-06 01:14:27.355565 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-06 01:14:27.355575 | orchestrator | 2025-05-06 01:14:27.355585 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-06 01:14:27.355595 | orchestrator | Tuesday 06 May 2025 01:10:06 +0000 (0:00:10.535) 0:04:04.474 *********** 2025-05-06 01:14:27.355606 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:14:27.355617 | orchestrator | 2025-05-06 01:14:27.356001 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-06 01:14:27.356018 | orchestrator | Tuesday 06 May 2025 01:10:07 +0000 (0:00:01.321) 0:04:05.795 *********** 2025-05-06 01:14:27.356027 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:14:27.356036 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.356045 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.356053 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.356062 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.356071 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.356080 | orchestrator | 2025-05-06 01:14:27.356089 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-06 01:14:27.356097 
| orchestrator | Tuesday 06 May 2025 01:10:07 +0000 (0:00:00.632) 0:04:06.428 *********** 2025-05-06 01:14:27.356106 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.356115 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.356123 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.356132 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 01:14:27.356142 | orchestrator | 2025-05-06 01:14:27.356150 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-06 01:14:27.356159 | orchestrator | Tuesday 06 May 2025 01:10:09 +0000 (0:00:01.123) 0:04:07.551 *********** 2025-05-06 01:14:27.356168 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-06 01:14:27.356177 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-06 01:14:27.356186 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-06 01:14:27.356195 | orchestrator | 2025-05-06 01:14:27.356204 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-06 01:14:27.356213 | orchestrator | Tuesday 06 May 2025 01:10:09 +0000 (0:00:00.601) 0:04:08.152 *********** 2025-05-06 01:14:27.356221 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-06 01:14:27.356230 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-06 01:14:27.356239 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-06 01:14:27.356248 | orchestrator | 2025-05-06 01:14:27.356274 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-06 01:14:27.356285 | orchestrator | Tuesday 06 May 2025 01:10:10 +0000 (0:00:01.269) 0:04:09.421 *********** 2025-05-06 01:14:27.356293 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-06 01:14:27.356311 | orchestrator | skipping: [testbed-node-3] 2025-05-06 
01:14:27.356325 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-06 01:14:27.356334 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.356343 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-06 01:14:27.356352 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.356360 | orchestrator | 2025-05-06 01:14:27.356369 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-06 01:14:27.356377 | orchestrator | Tuesday 06 May 2025 01:10:11 +0000 (0:00:00.747) 0:04:10.169 *********** 2025-05-06 01:14:27.356386 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-06 01:14:27.356394 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-06 01:14:27.356402 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.356411 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-06 01:14:27.356423 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-06 01:14:27.356431 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-06 01:14:27.356440 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-06 01:14:27.356448 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-06 01:14:27.356457 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.356466 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-06 01:14:27.356474 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-06 01:14:27.356524 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.356535 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-06 
01:14:27.356543 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-06 01:14:27.356552 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-06 01:14:27.356560 | orchestrator | 2025-05-06 01:14:27.356586 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-06 01:14:27.356596 | orchestrator | Tuesday 06 May 2025 01:10:12 +0000 (0:00:00.998) 0:04:11.168 *********** 2025-05-06 01:14:27.356605 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.356614 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.356622 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.356631 | orchestrator | changed: [testbed-node-3] 2025-05-06 01:14:27.356639 | orchestrator | changed: [testbed-node-4] 2025-05-06 01:14:27.356648 | orchestrator | changed: [testbed-node-5] 2025-05-06 01:14:27.356656 | orchestrator | 2025-05-06 01:14:27.356665 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-06 01:14:27.356674 | orchestrator | Tuesday 06 May 2025 01:10:13 +0000 (0:00:01.111) 0:04:12.280 *********** 2025-05-06 01:14:27.357040 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.357053 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.357062 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.357070 | orchestrator | changed: [testbed-node-3] 2025-05-06 01:14:27.357079 | orchestrator | changed: [testbed-node-5] 2025-05-06 01:14:27.357087 | orchestrator | changed: [testbed-node-4] 2025-05-06 01:14:27.357096 | orchestrator | 2025-05-06 01:14:27.357104 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-06 01:14:27.357113 | orchestrator | Tuesday 06 May 2025 01:10:16 +0000 (0:00:02.272) 0:04:14.553 *********** 2025-05-06 01:14:27.357122 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.357141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.357164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.357230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.357244 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.357253 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.357324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.357336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.357346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.357367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.357431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.357444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.357459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.357480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.357508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.357517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.357578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.357591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.357606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.357615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.357624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.357633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.357642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.357696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.357719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.357734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.357762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.357772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.357781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-05-06 01:14:27.357789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.357851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.357874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 
2025-05-06 01:14:27.357884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.357892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.357902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.357911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.357977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.357991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.358054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.358063 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.358131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.358171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.358188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.358197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.358206 | orchestrator | 2025-05-06 01:14:27.358215 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-06 01:14:27.358224 | orchestrator | Tuesday 06 May 2025 01:10:18 +0000 (0:00:02.371) 0:04:16.924 *********** 2025-05-06 01:14:27.358245 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-06 01:14:27.358302 | orchestrator | 2025-05-06 01:14:27.358320 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-06 01:14:27.358329 | orchestrator | Tuesday 06 May 2025 01:10:19 +0000 (0:00:01.305) 0:04:18.229 *********** 2025-05-06 01:14:27.358392 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358416 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358532 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358551 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358690 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.358699 | orchestrator | 2025-05-06 01:14:27.358708 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-06 01:14:27.358716 | orchestrator | Tuesday 06 May 2025 01:10:23 +0000 (0:00:03.816) 0:04:22.046 *********** 2025-05-06 01:14:27.358725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.358749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.358805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.358818 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.358827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.358850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.358859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.358877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.358892 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.358954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.358967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.358976 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:14:27.358985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.358994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.359003 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.359012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.359035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.359045 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.359104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.359117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.359127 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.359135 | orchestrator | 2025-05-06 01:14:27.359156 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-06 01:14:27.359165 | orchestrator | Tuesday 06 May 2025 01:10:25 +0000 (0:00:01.801) 0:04:23.848 *********** 2025-05-06 01:14:27.359174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}})  2025-05-06 01:14:27.359183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.359201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.359236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.359246 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.359255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.359291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.359307 | orchestrator | 
skipping: [testbed-node-3] 2025-05-06 01:14:27.359321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.359351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.359361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.359370 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.359403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.359415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.359423 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.359432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.359441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.359455 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.359464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.359480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.359490 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.359511 | orchestrator | 2025-05-06 01:14:27.359520 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-06 01:14:27.359529 | orchestrator | Tuesday 06 May 2025 01:10:27 +0000 (0:00:02.287) 0:04:26.135 *********** 2025-05-06 01:14:27.359537 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.359546 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.359554 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.359563 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-06 01:14:27.359572 | orchestrator | 2025-05-06 01:14:27.359581 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-06 01:14:27.359589 | orchestrator | Tuesday 06 May 2025 01:10:28 +0000 (0:00:01.262) 0:04:27.397 *********** 2025-05-06 01:14:27.359618 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-06 01:14:27.359628 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-06 01:14:27.359637 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-06 01:14:27.359645 | orchestrator | 2025-05-06 01:14:27.359654 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-06 01:14:27.359662 | orchestrator | Tuesday 06 May 2025 01:10:29 +0000 (0:00:00.860) 
0:04:28.258 *********** 2025-05-06 01:14:27.359671 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-06 01:14:27.359679 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-06 01:14:27.359688 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-06 01:14:27.359696 | orchestrator | 2025-05-06 01:14:27.359705 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-06 01:14:27.359713 | orchestrator | Tuesday 06 May 2025 01:10:30 +0000 (0:00:00.780) 0:04:29.038 *********** 2025-05-06 01:14:27.359721 | orchestrator | ok: [testbed-node-3] 2025-05-06 01:14:27.359730 | orchestrator | ok: [testbed-node-4] 2025-05-06 01:14:27.359739 | orchestrator | ok: [testbed-node-5] 2025-05-06 01:14:27.359747 | orchestrator | 2025-05-06 01:14:27.359756 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-06 01:14:27.359768 | orchestrator | Tuesday 06 May 2025 01:10:31 +0000 (0:00:00.762) 0:04:29.801 *********** 2025-05-06 01:14:27.359779 | orchestrator | ok: [testbed-node-3] 2025-05-06 01:14:27.359789 | orchestrator | ok: [testbed-node-4] 2025-05-06 01:14:27.359799 | orchestrator | ok: [testbed-node-5] 2025-05-06 01:14:27.359809 | orchestrator | 2025-05-06 01:14:27.359819 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-06 01:14:27.359840 | orchestrator | Tuesday 06 May 2025 01:10:31 +0000 (0:00:00.307) 0:04:30.108 *********** 2025-05-06 01:14:27.359850 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-06 01:14:27.359860 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-06 01:14:27.359871 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-06 01:14:27.359881 | orchestrator | 2025-05-06 01:14:27.359891 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-06 01:14:27.359901 | orchestrator | 
Tuesday 06 May 2025 01:10:32 +0000 (0:00:01.310) 0:04:31.419 *********** 2025-05-06 01:14:27.359911 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-06 01:14:27.359921 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-06 01:14:27.359931 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-06 01:14:27.359941 | orchestrator | 2025-05-06 01:14:27.359951 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-06 01:14:27.359961 | orchestrator | Tuesday 06 May 2025 01:10:34 +0000 (0:00:01.382) 0:04:32.802 *********** 2025-05-06 01:14:27.359972 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-06 01:14:27.359981 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-06 01:14:27.359992 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-06 01:14:27.360002 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-06 01:14:27.360015 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-06 01:14:27.360026 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-06 01:14:27.360036 | orchestrator | 2025-05-06 01:14:27.360046 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-06 01:14:27.360056 | orchestrator | Tuesday 06 May 2025 01:10:39 +0000 (0:00:05.636) 0:04:38.439 *********** 2025-05-06 01:14:27.360066 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:14:27.360076 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.360086 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.360096 | orchestrator | 2025-05-06 01:14:27.360106 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-06 01:14:27.360116 | orchestrator | Tuesday 06 May 2025 01:10:40 +0000 (0:00:00.310) 0:04:38.749 *********** 2025-05-06 
01:14:27.360126 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:14:27.360136 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.360146 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.360156 | orchestrator | 2025-05-06 01:14:27.360166 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-06 01:14:27.360177 | orchestrator | Tuesday 06 May 2025 01:10:40 +0000 (0:00:00.545) 0:04:39.295 *********** 2025-05-06 01:14:27.360187 | orchestrator | changed: [testbed-node-3] 2025-05-06 01:14:27.360197 | orchestrator | changed: [testbed-node-4] 2025-05-06 01:14:27.360214 | orchestrator | changed: [testbed-node-5] 2025-05-06 01:14:27.360225 | orchestrator | 2025-05-06 01:14:27.360235 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-06 01:14:27.360245 | orchestrator | Tuesday 06 May 2025 01:10:42 +0000 (0:00:01.424) 0:04:40.719 *********** 2025-05-06 01:14:27.360306 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-06 01:14:27.360325 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-06 01:14:27.360336 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-06 01:14:27.360346 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-06 01:14:27.360356 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-06 01:14:27.360373 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 
'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-06 01:14:27.360383 | orchestrator | 2025-05-06 01:14:27.360393 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-06 01:14:27.360425 | orchestrator | Tuesday 06 May 2025 01:10:45 +0000 (0:00:03.581) 0:04:44.300 *********** 2025-05-06 01:14:27.360435 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-06 01:14:27.360444 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-06 01:14:27.360453 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-06 01:14:27.360462 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-06 01:14:27.360470 | orchestrator | changed: [testbed-node-3] 2025-05-06 01:14:27.360479 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-06 01:14:27.360488 | orchestrator | changed: [testbed-node-5] 2025-05-06 01:14:27.360496 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-06 01:14:27.360505 | orchestrator | changed: [testbed-node-4] 2025-05-06 01:14:27.360513 | orchestrator | 2025-05-06 01:14:27.360522 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-06 01:14:27.360530 | orchestrator | Tuesday 06 May 2025 01:10:49 +0000 (0:00:03.390) 0:04:47.690 *********** 2025-05-06 01:14:27.360539 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:14:27.360547 | orchestrator | 2025-05-06 01:14:27.360556 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-06 01:14:27.360564 | orchestrator | Tuesday 06 May 2025 01:10:49 +0000 (0:00:00.121) 0:04:47.811 *********** 2025-05-06 01:14:27.360573 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:14:27.360581 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.360590 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.360598 | orchestrator | skipping: [testbed-node-0] 2025-05-06 
01:14:27.360607 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.360615 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.360623 | orchestrator | 2025-05-06 01:14:27.360632 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-06 01:14:27.360640 | orchestrator | Tuesday 06 May 2025 01:10:50 +0000 (0:00:00.837) 0:04:48.649 *********** 2025-05-06 01:14:27.360649 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-06 01:14:27.360657 | orchestrator | 2025-05-06 01:14:27.360666 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-06 01:14:27.360674 | orchestrator | Tuesday 06 May 2025 01:10:50 +0000 (0:00:00.395) 0:04:49.045 *********** 2025-05-06 01:14:27.360683 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:14:27.360695 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.360703 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.360712 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.360721 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.360729 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.360738 | orchestrator | 2025-05-06 01:14:27.360746 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-06 01:14:27.360755 | orchestrator | Tuesday 06 May 2025 01:10:51 +0000 (0:00:00.819) 0:04:49.864 *********** 2025-05-06 01:14:27.360764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.360778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.360815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.360827 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.360836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.360845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.360860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.360876 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}}) 2025-05-06 01:14:27.360905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.360916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.360924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.360933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.360948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.360958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.360985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.360995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 
'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.361045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.361101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.361149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.361226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361409 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 
'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361488 | orchestrator | 2025-05-06 01:14:27.361497 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-06 01:14:27.361506 | orchestrator | Tuesday 06 May 2025 01:10:55 +0000 (0:00:03.865) 0:04:53.730 *********** 2025-05-06 01:14:27.361522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.361540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.361549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.361558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.361567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.361646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.361655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.361708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.361717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.361756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.361794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.361816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.361826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.361834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.361844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.361871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.361960 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.361975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.361984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.361993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.362009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.362049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.362081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.362098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.362107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.362116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.362125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.362134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.362151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.362181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.362196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.362206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.362221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.362231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.362240 | orchestrator | 2025-05-06 01:14:27.362249 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] 
******************* 2025-05-06 01:14:27.362273 | orchestrator | Tuesday 06 May 2025 01:11:02 +0000 (0:00:07.379) 0:05:01.110 *********** 2025-05-06 01:14:27.362283 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:14:27.362292 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.362301 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.362309 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.362317 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.362326 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.362339 | orchestrator | 2025-05-06 01:14:27.362348 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-06 01:14:27.362360 | orchestrator | Tuesday 06 May 2025 01:11:04 +0000 (0:00:01.563) 0:05:02.673 *********** 2025-05-06 01:14:27.362369 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-06 01:14:27.362377 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-06 01:14:27.362386 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-06 01:14:27.362394 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-06 01:14:27.362403 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.362432 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-06 01:14:27.362442 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-06 01:14:27.362451 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.362459 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-06 01:14:27.362468 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.362476 
| orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-06 01:14:27.362485 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-06 01:14:27.362493 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-06 01:14:27.362505 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-06 01:14:27.362514 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-06 01:14:27.362523 | orchestrator | 2025-05-06 01:14:27.362531 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-06 01:14:27.362540 | orchestrator | Tuesday 06 May 2025 01:11:09 +0000 (0:00:05.232) 0:05:07.906 *********** 2025-05-06 01:14:27.362548 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:14:27.362557 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.362565 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.362574 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.362583 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.362591 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.362600 | orchestrator | 2025-05-06 01:14:27.362608 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-06 01:14:27.362617 | orchestrator | Tuesday 06 May 2025 01:11:10 +0000 (0:00:00.845) 0:05:08.751 *********** 2025-05-06 01:14:27.362625 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-06 01:14:27.362634 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-06 01:14:27.362642 | orchestrator | skipping: [testbed-node-1] 
=> (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-06 01:14:27.362651 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-06 01:14:27.362660 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362668 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-06 01:14:27.362676 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-06 01:14:27.362685 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362693 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.362707 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362715 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-06 01:14:27.362724 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362732 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.362741 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362749 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.362761 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362770 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362778 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362787 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362795 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362804 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-06 01:14:27.362812 | orchestrator |
2025-05-06 01:14:27.362820 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-05-06 01:14:27.362829 | orchestrator | Tuesday 06 May 2025 01:11:17 +0000 (0:00:07.355) 0:05:16.106 ***********
2025-05-06 01:14:27.362837 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-06 01:14:27.362846 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-06 01:14:27.362872 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-06 01:14:27.362883 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-06 01:14:27.362891 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-06 01:14:27.362900 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-06 01:14:27.362908 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-06 01:14:27.362917 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-06 01:14:27.362925 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-06 01:14:27.362933 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-06 01:14:27.362942 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-06 01:14:27.362951 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-06 01:14:27.362959 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.362968 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-06 01:14:27.362976 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-06 01:14:27.362984 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.362993 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-06 01:14:27.363002 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.363010 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-06 01:14:27.363018 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-06 01:14:27.363032 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-06 01:14:27.363040 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-06 01:14:27.363049 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-06 01:14:27.363058 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-06 01:14:27.363066 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-06 01:14:27.363075 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-06 01:14:27.363083 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-06 01:14:27.363091 | orchestrator |
2025-05-06 01:14:27.363100 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-05-06 01:14:27.363108 | orchestrator | Tuesday 06 May 2025 01:11:27 +0000 (0:00:09.445) 0:05:25.552 ***********
2025-05-06 01:14:27.363117 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:14:27.363125 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:14:27.363133 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:14:27.363142 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.363150 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.363158 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.363167 | orchestrator |
2025-05-06 01:14:27.363175 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-05-06 01:14:27.363187 | orchestrator | Tuesday 06 May 2025 01:11:27 +0000 (0:00:00.687) 0:05:26.239 ***********
2025-05-06 01:14:27.363196 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:14:27.363205 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:14:27.363213 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:14:27.363222 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.363230 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.363245 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.363253 | orchestrator |
2025-05-06 01:14:27.363310 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-05-06 01:14:27.363325 | orchestrator | Tuesday 06 May 2025 01:11:28 +0000 (0:00:00.864) 0:05:27.103 ***********
2025-05-06 01:14:27.363334 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.363342 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.363350 |
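(Note: the "Generating 'hostnqn' file" task above skips the control nodes and runs only on the compute nodes. For orientation, a hostnqn is a UUID-based NVMe host qualified name; the exact command the role invokes is not visible in this log, but a minimal Python sketch of the kind of value such a generator writes to `/etc/nvme/hostnqn` is:)

```python
import uuid

def generate_hostnqn() -> str:
    # UUID-based NVMe host NQN, the same shape produced by `nvme gen-hostnqn`.
    # This is an illustrative sketch, not the role's actual implementation.
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

nqn = generate_hostnqn()
print(nqn)
```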
orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.363359 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:14:27.363367 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:14:27.363375 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:14:27.363384 | orchestrator |
2025-05-06 01:14:27.363392 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-05-06 01:14:27.363401 | orchestrator | Tuesday 06 May 2025 01:11:31 +0000 (0:00:03.124) 0:05:30.228 ***********
2025-05-06 01:14:27.363433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-06 01:14:27.363444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-06 01:14:27.363459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:14:27.363495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363548 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:14:27.363557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-06 01:14:27.363572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-06 01:14:27.363581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes':
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-06 01:14:27.363616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:14:27.363638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-06 01:14:27.363663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363679 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:14:27.363687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:14:27.363721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363753 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:14:27.363761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''],
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-06 01:14:27.363777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-06 01:14:27.363786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:14:27.363811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363847 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.363860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-06 01:14:27.363869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-06 01:14:27.363877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image':
'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:14:27.363911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.363944 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.363952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-06 01:14:27.363969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-06 01:14:27.363977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.363989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})
2025-05-06 01:14:27.364001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-06 01:14:27.364009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout':
'30'}}})  2025-05-06 01:14:27.364034 | orchestrator | skipping: [testbed-node-2] 2025-05-06 01:14:27.364042 | orchestrator | 2025-05-06 01:14:27.364050 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-06 01:14:27.364058 | orchestrator | Tuesday 06 May 2025 01:11:33 +0000 (0:00:02.063) 0:05:32.292 *********** 2025-05-06 01:14:27.364066 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-06 01:14:27.364074 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-06 01:14:27.364081 | orchestrator | skipping: [testbed-node-3] 2025-05-06 01:14:27.364089 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-06 01:14:27.364097 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-06 01:14:27.364109 | orchestrator | skipping: [testbed-node-4] 2025-05-06 01:14:27.364117 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-06 01:14:27.364125 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-06 01:14:27.364133 | orchestrator | skipping: [testbed-node-5] 2025-05-06 01:14:27.364141 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-06 01:14:27.364149 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-06 01:14:27.364157 | orchestrator | skipping: [testbed-node-0] 2025-05-06 01:14:27.364164 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-06 01:14:27.364172 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-06 01:14:27.364180 | orchestrator | skipping: [testbed-node-1] 2025-05-06 01:14:27.364188 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-06 01:14:27.364196 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-06 01:14:27.364204 | orchestrator | skipping: [testbed-node-2] 2025-05-06 
01:14:27.364211 | orchestrator | 2025-05-06 01:14:27.364219 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-06 01:14:27.364227 | orchestrator | Tuesday 06 May 2025 01:11:34 +0000 (0:00:00.790) 0:05:33.082 *********** 2025-05-06 01:14:27.364246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.364255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.364288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.364302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.364319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.364332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.364341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-06 01:14:27.364349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-06 01:14:27.364357 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-06 01:14:27.364370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.364386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.364398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.364406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.364414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.364423 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.364435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.364443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.364451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.364469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.364478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.364487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.364495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.364508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.364516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.364524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.364533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-06 01:14:27.364551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.364560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.364568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-06 01:14:27.364580 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.364589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-06 01:14:27.364597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-06 01:14:27.364605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.364617 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-06 01:14:27.364626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-06 01:14:27.364646 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-06 01:14:27.364785 | orchestrator |
2025-05-06 01:14:27.364794 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-06 01:14:27.364802 | orchestrator | Tuesday 06 May 2025 01:11:38 +0000 (0:00:03.639) 0:05:36.721 ***********
2025-05-06 01:14:27.364810 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:14:27.364818 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:14:27.364826 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:14:27.364834 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.364842 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.364850 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.364858 | orchestrator |
2025-05-06 01:14:27.364866 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-06 01:14:27.364874 | orchestrator | Tuesday 06 May 2025 01:11:38 +0000 (0:00:00.705) 0:05:37.427 ***********
2025-05-06 01:14:27.364882 | orchestrator |
2025-05-06 01:14:27.364890 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-06 01:14:27.364898 | orchestrator | Tuesday 06 May 2025 01:11:39 +0000 (0:00:00.272) 0:05:37.699 ***********
2025-05-06 01:14:27.364906 | orchestrator |
2025-05-06 01:14:27.364914 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-06 01:14:27.364922 | orchestrator | Tuesday 06 May 2025 01:11:39 +0000 (0:00:00.107) 0:05:37.807 ***********
2025-05-06 01:14:27.364930 | orchestrator |
2025-05-06 01:14:27.364937 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-06 01:14:27.364945 | orchestrator | Tuesday 06 May 2025 01:11:39 +0000 (0:00:00.293) 0:05:38.101 ***********
2025-05-06 01:14:27.364953 | orchestrator |
2025-05-06 01:14:27.364961 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-06 01:14:27.364969 | orchestrator | Tuesday 06 May 2025 01:11:39 +0000 (0:00:00.111) 0:05:38.212 ***********
2025-05-06 01:14:27.364977 | orchestrator |
2025-05-06 01:14:27.364985 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-06 01:14:27.364993 | orchestrator | Tuesday 06 May 2025 01:11:40 +0000 (0:00:00.269) 0:05:38.481 ***********
2025-05-06 01:14:27.365001 | orchestrator |
2025-05-06 01:14:27.365009 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-05-06 01:14:27.365017 | orchestrator | Tuesday 06 May 2025 01:11:40 +0000 (0:00:00.109) 0:05:38.590 ***********
2025-05-06 01:14:27.365024 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.365032 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:14:27.365040 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:14:27.365048 | orchestrator |
2025-05-06 01:14:27.365056 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-05-06 01:14:27.365064 | orchestrator | Tuesday 06 May 2025 01:11:52 +0000 (0:00:11.888) 0:05:50.479 ***********
2025-05-06 01:14:27.365076 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.365084 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:14:27.365092 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:14:27.365100 | orchestrator |
2025-05-06 01:14:27.365108 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-05-06 01:14:27.365116 | orchestrator | Tuesday 06 May 2025 01:12:07 +0000 (0:00:15.583) 0:06:06.062 ***********
2025-05-06 01:14:27.365127 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:14:27.365135 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:14:27.365143 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:14:27.365151 | orchestrator |
2025-05-06 01:14:27.365159 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-05-06 01:14:27.365167 | orchestrator | Tuesday 06 May 2025 01:12:30 +0000 (0:00:22.595) 0:06:28.657 ***********
2025-05-06 01:14:27.365175 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:14:27.365183 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:14:27.365191 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:14:27.365198 | orchestrator |
2025-05-06 01:14:27.365206 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-05-06 01:14:27.365218 | orchestrator | Tuesday 06 May 2025 01:12:56 +0000 (0:00:26.601) 0:06:55.259 ***********
2025-05-06 01:14:27.365226 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:14:27.365234 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:14:27.365242 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:14:27.365250 | orchestrator |
2025-05-06 01:14:27.365297 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-05-06 01:14:27.365307 | orchestrator | Tuesday 06 May 2025 01:12:57 +0000 (0:00:01.033) 0:06:56.292 ***********
2025-05-06 01:14:27.365315 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:14:27.365323 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:14:27.365330 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:14:27.365338 | orchestrator |
2025-05-06 01:14:27.365346 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-05-06 01:14:27.365354 | orchestrator | Tuesday 06 May 2025 01:12:58 +0000 (0:00:00.751) 0:06:57.043 ***********
2025-05-06 01:14:27.365362 | orchestrator | changed: [testbed-node-5]
2025-05-06 01:14:27.365370 | orchestrator | changed: [testbed-node-3]
2025-05-06 01:14:27.365377 | orchestrator | changed: [testbed-node-4]
2025-05-06 01:14:27.365385 | orchestrator |
2025-05-06 01:14:27.365393 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-05-06 01:14:27.365402 | orchestrator | Tuesday 06 May 2025 01:13:21 +0000 (0:00:23.251) 0:07:20.295 ***********
2025-05-06 01:14:27.365410 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:14:27.365418 | orchestrator |
2025-05-06 01:14:27.365426 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-05-06 01:14:27.365433 | orchestrator | Tuesday 06 May 2025 01:13:21 +0000 (0:00:00.117) 0:07:20.412 ***********
2025-05-06 01:14:27.365441 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:14:27.365449 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:14:27.365457 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.365465 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.365476 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.365484 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-05-06 01:14:27.365492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-06 01:14:27.365501 | orchestrator |
2025-05-06 01:14:27.365508 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-05-06 01:14:27.365516 | orchestrator | Tuesday 06 May 2025 01:13:43 +0000 (0:00:21.891) 0:07:42.304 ***********
2025-05-06 01:14:27.365524 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:14:27.365532 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.365540 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.365552 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:14:27.365560 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:14:27.365568 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.365576 | orchestrator |
2025-05-06 01:14:27.365584 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-05-06 01:14:27.365591 | orchestrator | Tuesday 06 May 2025 01:13:52 +0000 (0:00:08.503) 0:07:50.807 ***********
2025-05-06 01:14:27.365599 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:14:27.365607 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:14:27.365615 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.365623 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.365631 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.365638 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-05-06 01:14:27.365646 | orchestrator |
2025-05-06 01:14:27.365654 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-06 01:14:27.365662 | orchestrator | Tuesday 06 May 2025 01:13:55 +0000 (0:00:03.366) 0:07:54.174 ***********
2025-05-06 01:14:27.365670 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-06 01:14:27.365678 | orchestrator |
2025-05-06 01:14:27.365685 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-06 01:14:27.365693 | orchestrator | Tuesday 06 May 2025 01:14:06 +0000 (0:00:10.517) 0:08:04.692 ***********
2025-05-06 01:14:27.365701 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-06 01:14:27.365709 | orchestrator |
2025-05-06 01:14:27.365717 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-05-06 01:14:27.365725 | orchestrator | Tuesday 06 May 2025 01:14:07 +0000 (0:00:01.066) 0:08:05.758 ***********
2025-05-06 01:14:27.365733 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:14:27.365740 | orchestrator |
2025-05-06 01:14:27.365748 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-05-06 01:14:27.365756 | orchestrator | Tuesday 06 May 2025 01:14:08 +0000 (0:00:01.298) 0:08:07.057 ***********
2025-05-06 01:14:27.365764 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-06 01:14:27.365772 | orchestrator |
2025-05-06 01:14:27.365779 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-05-06 01:14:27.365786 | orchestrator | Tuesday 06 May 2025 01:14:18 +0000 (0:00:09.607) 0:08:16.664 ***********
2025-05-06 01:14:27.365793 | orchestrator | ok: [testbed-node-3]
2025-05-06 01:14:27.365800 | orchestrator | ok: [testbed-node-4]
2025-05-06 01:14:27.365807 | orchestrator | ok: [testbed-node-5]
2025-05-06 01:14:27.365814 | orchestrator | ok: [testbed-node-0]
2025-05-06 01:14:27.365821 | orchestrator | ok: [testbed-node-1]
2025-05-06 01:14:27.365828 | orchestrator | ok: [testbed-node-2]
2025-05-06 01:14:27.365834 | orchestrator |
2025-05-06 01:14:27.365844 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-05-06 01:14:27.365851 | orchestrator |
2025-05-06 01:14:27.365858 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-05-06 01:14:27.365865 | orchestrator | Tuesday 06 May 2025 01:14:20 +0000 (0:00:02.080) 0:08:18.744 ***********
2025-05-06 01:14:27.365872 | orchestrator | changed: [testbed-node-0]
2025-05-06 01:14:27.365879 | orchestrator | changed: [testbed-node-1]
2025-05-06 01:14:27.365886 | orchestrator | changed: [testbed-node-2]
2025-05-06 01:14:27.365893 | orchestrator |
2025-05-06 01:14:27.365900 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-05-06 01:14:27.365906 | orchestrator |
2025-05-06 01:14:27.365913 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-05-06 01:14:27.365920 | orchestrator | Tuesday 06 May 2025 01:14:21 +0000 (0:00:00.928) 0:08:19.672 ***********
2025-05-06 01:14:27.365927 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.365933 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.365940 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.365947 | orchestrator |
2025-05-06 01:14:27.365954 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-05-06 01:14:27.365965 | orchestrator |
2025-05-06 01:14:27.365972 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-05-06 01:14:27.365982 | orchestrator | Tuesday 06 May 2025 01:14:21 +0000 (0:00:00.721) 0:08:20.394 ***********
2025-05-06 01:14:27.365989 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-05-06 01:14:27.365996 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-05-06 01:14:27.366003 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-05-06 01:14:27.366010 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-05-06 01:14:27.366039 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-05-06 01:14:27.366046 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-05-06 01:14:27.366053 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-05-06 01:14:27.366060 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-05-06 01:14:27.366067 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-05-06 01:14:27.366074 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-05-06 01:14:27.366081 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-05-06 01:14:27.366087 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-05-06 01:14:27.366094 | orchestrator | skipping: [testbed-node-3]
2025-05-06 01:14:27.366102 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-05-06 01:14:27.366108 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-05-06 01:14:27.366118 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-05-06 01:14:27.366125 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-05-06 01:14:27.366132 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-05-06 01:14:27.366139 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-05-06 01:14:27.366146 | orchestrator | skipping: [testbed-node-4]
2025-05-06 01:14:27.366153 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-05-06 01:14:27.366160 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-05-06 01:14:27.366166 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-05-06 01:14:27.366173 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-05-06 01:14:27.366180 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-05-06 01:14:27.366187 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-05-06 01:14:27.366194 | orchestrator | skipping: [testbed-node-5]
2025-05-06 01:14:27.366201 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-05-06 01:14:27.366207 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-05-06 01:14:27.366214 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-05-06 01:14:27.366221 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-05-06 01:14:27.366228 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-05-06 01:14:27.366235 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-05-06 01:14:27.366241 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.366248 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.366255 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-05-06 01:14:27.366278 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-05-06 01:14:27.366285 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-05-06 01:14:27.366292 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-05-06 01:14:27.366299 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-05-06 01:14:27.366306 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-05-06 01:14:27.366318 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:27.366325 | orchestrator |
2025-05-06 01:14:27.366331 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-05-06 01:14:27.366338 | orchestrator |
2025-05-06 01:14:27.366345 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-05-06 01:14:27.366352 | orchestrator | Tuesday 06 May 2025 01:14:23 +0000 (0:00:01.318) 0:08:21.713 ***********
2025-05-06 01:14:27.366359 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-05-06 01:14:27.366366 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-05-06 01:14:27.366372 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:27.366379 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-05-06 01:14:27.366386 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-05-06 01:14:27.366393 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:27.366403 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-05-06 01:14:30.405317 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-05-06 01:14:30.405447 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:30.405467 | orchestrator |
2025-05-06 01:14:30.405482 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-05-06 01:14:30.405497 | orchestrator |
2025-05-06 01:14:30.405512 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-05-06 01:14:30.405526 | orchestrator | Tuesday 06 May 2025 01:14:23 +0000 (0:00:00.591) 0:08:22.304 ***********
2025-05-06 01:14:30.405540 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:30.405554 | orchestrator |
2025-05-06 01:14:30.405568 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-05-06 01:14:30.405582 | orchestrator |
2025-05-06 01:14:30.405596 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-05-06 01:14:30.405610 | orchestrator | Tuesday 06 May 2025 01:14:24 +0000 (0:00:00.903) 0:08:23.207 ***********
2025-05-06 01:14:30.405624 | orchestrator | skipping: [testbed-node-0]
2025-05-06 01:14:30.405638 | orchestrator | skipping: [testbed-node-1]
2025-05-06 01:14:30.405651 | orchestrator | skipping: [testbed-node-2]
2025-05-06 01:14:30.405665 | orchestrator |
2025-05-06 01:14:30.405679 | orchestrator | PLAY RECAP *********************************************************************
2025-05-06 01:14:30.405693 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-06 01:14:30.405709 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-05-06 01:14:30.405723 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-06 01:14:30.405737 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-06 01:14:30.405751 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-05-06 01:14:30.405766 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-06 01:14:30.405780 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-06 01:14:30.405793 | orchestrator |
2025-05-06 01:14:30.405807 | orchestrator |
2025-05-06 01:14:30.405824 | orchestrator | TASKS RECAP ********************************************************************
2025-05-06 01:14:30.405840 | orchestrator | Tuesday 06 May 2025 01:14:25 +0000 (0:00:00.507) 0:08:23.715 ***********
2025-05-06 01:14:30.405855 | orchestrator | ===============================================================================
2025-05-06 01:14:30.405901 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.02s
2025-05-06 01:14:30.405918 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 26.60s
2025-05-06 01:14:30.405934 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.25s
2025-05-06 01:14:30.405949 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.60s
2025-05-06 01:14:30.405965 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.89s
2025-05-06 01:14:30.405981 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.43s
2025-05-06 01:14:30.405997 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.05s
2025-05-06 01:14:30.406074 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.10s
2025-05-06 01:14:30.406094 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.58s
2025-05-06 01:14:30.406111 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.77s
2025-05-06 01:14:30.406126 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.89s
2025-05-06 01:14:30.406142 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.14s
2025-05-06 01:14:30.406158 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.86s
2025-05-06 01:14:30.406174 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.54s
2025-05-06 01:14:30.406188 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.52s
2025-05-06 01:14:30.406202 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.30s
2025-05-06 01:14:30.406215 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.61s
2025-05-06 01:14:30.406230 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.45s
2025-05-06 01:14:30.406278 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.36s
2025-05-06 01:14:30.406294 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.50s
2025-05-06 01:14:30.406309 | orchestrator | 2025-05-06 01:14:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:30.406323 | orchestrator | 2025-05-06 01:14:27 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:30.406357 | orchestrator | 2025-05-06 01:14:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:33.452601 | orchestrator | 2025-05-06 01:14:30 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:33.452789 | orchestrator | 2025-05-06 01:14:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:36.507355 | orchestrator | 2025-05-06 01:14:33 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:36.507516 | orchestrator | 2025-05-06 01:14:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:39.555760 | orchestrator | 2025-05-06 01:14:36 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:39.555907 | orchestrator | 2025-05-06 01:14:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:42.607461 | orchestrator | 2025-05-06 01:14:39 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:42.607633 | orchestrator | 2025-05-06 01:14:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:45.652224 | orchestrator | 2025-05-06 01:14:42 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:45.652429 | orchestrator | 2025-05-06 01:14:45 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:48.693219 | orchestrator | 2025-05-06 01:14:45 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:48.693446 | orchestrator | 2025-05-06 01:14:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:51.744219 | orchestrator | 2025-05-06 01:14:48 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:51.744417 | orchestrator | 2025-05-06 01:14:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:54.791725 | orchestrator | 2025-05-06 01:14:51 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:54.791875 | orchestrator | 2025-05-06 01:14:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:14:57.835156 | orchestrator | 2025-05-06 01:14:54 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:14:57.835387 | orchestrator | 2025-05-06 01:14:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:00.878503 | orchestrator | 2025-05-06 01:14:57 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:00.878651 | orchestrator | 2025-05-06 01:15:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:03.924136 | orchestrator | 2025-05-06 01:15:00 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:03.924315 | orchestrator | 2025-05-06 01:15:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:06.972042 | orchestrator | 2025-05-06 01:15:03 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:06.972187 | orchestrator | 2025-05-06 01:15:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:10.023810 | orchestrator | 2025-05-06 01:15:06 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:10.023966 | orchestrator | 2025-05-06 01:15:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:13.069688 | orchestrator | 2025-05-06 01:15:10 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:13.069842 | orchestrator | 2025-05-06 01:15:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:16.117678 | orchestrator | 2025-05-06 01:15:13 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:16.117840 | orchestrator | 2025-05-06 01:15:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:19.168369 | orchestrator | 2025-05-06 01:15:16 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:19.168519 | orchestrator | 2025-05-06 01:15:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:22.215827 | orchestrator | 2025-05-06 01:15:19 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:22.215974 | orchestrator | 2025-05-06 01:15:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:25.267353 | orchestrator | 2025-05-06 01:15:22 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:25.267531 | orchestrator | 2025-05-06 01:15:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:28.305890 | orchestrator | 2025-05-06 01:15:25 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:28.306091 | orchestrator | 2025-05-06 01:15:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:31.358549 | orchestrator | 2025-05-06 01:15:28 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:31.358661 | orchestrator | 2025-05-06 01:15:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:34.400956 | orchestrator | 2025-05-06 01:15:31 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:34.401139 | orchestrator | 2025-05-06 01:15:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:37.449599 | orchestrator | 2025-05-06 01:15:34 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:37.449741 | orchestrator | 2025-05-06 01:15:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:40.498082 | orchestrator | 2025-05-06 01:15:37 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:40.498286 | orchestrator | 2025-05-06 01:15:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:40.498435 | orchestrator | 2025-05-06 01:15:40 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:43.539575 | orchestrator | 2025-05-06 01:15:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:46.582330 | orchestrator | 2025-05-06 01:15:43 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:46.582475 | orchestrator | 2025-05-06 01:15:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:49.626578 | orchestrator | 2025-05-06 01:15:46 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:49.626769 | orchestrator | 2025-05-06 01:15:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:52.669949 | orchestrator | 2025-05-06 01:15:49 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:52.670234 | orchestrator | 2025-05-06 01:15:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:55.716281 | orchestrator | 2025-05-06 01:15:52 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:15:55.716429 | orchestrator | 2025-05-06 01:15:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:15:58.763485 | orchestrator | 2025-05-06 01:15:55 | INFO  | Wait 1 second(s) until the next
check 2025-05-06 01:15:58.763627 | orchestrator | 2025-05-06 01:15:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:01.807431 | orchestrator | 2025-05-06 01:15:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:01.807562 | orchestrator | 2025-05-06 01:16:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:04.847138 | orchestrator | 2025-05-06 01:16:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:04.847319 | orchestrator | 2025-05-06 01:16:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:07.901867 | orchestrator | 2025-05-06 01:16:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:07.902068 | orchestrator | 2025-05-06 01:16:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:10.947026 | orchestrator | 2025-05-06 01:16:07 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:10.947198 | orchestrator | 2025-05-06 01:16:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:13.997538 | orchestrator | 2025-05-06 01:16:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:13.997700 | orchestrator | 2025-05-06 01:16:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:17.051914 | orchestrator | 2025-05-06 01:16:13 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:17.052056 | orchestrator | 2025-05-06 01:16:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:20.098651 | orchestrator | 2025-05-06 01:16:17 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:20.098817 | orchestrator | 2025-05-06 01:16:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:20.098968 | orchestrator | 2025-05-06 01:16:20 | INFO  | Wait 1 second(s) until the next check 
2025-05-06 01:16:23.149698 | orchestrator | 2025-05-06 01:16:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:26.199580 | orchestrator | 2025-05-06 01:16:23 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:26.199766 | orchestrator | 2025-05-06 01:16:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:29.248616 | orchestrator | 2025-05-06 01:16:26 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:29.248761 | orchestrator | 2025-05-06 01:16:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:32.296561 | orchestrator | 2025-05-06 01:16:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:32.296708 | orchestrator | 2025-05-06 01:16:32 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:35.342104 | orchestrator | 2025-05-06 01:16:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:35.342347 | orchestrator | 2025-05-06 01:16:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:38.386364 | orchestrator | 2025-05-06 01:16:35 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:38.386507 | orchestrator | 2025-05-06 01:16:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:41.431475 | orchestrator | 2025-05-06 01:16:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:41.431624 | orchestrator | 2025-05-06 01:16:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:44.478605 | orchestrator | 2025-05-06 01:16:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:44.478773 | orchestrator | 2025-05-06 01:16:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:47.532946 | orchestrator | 2025-05-06 01:16:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 
01:16:47.533047 | orchestrator | 2025-05-06 01:16:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:50.581939 | orchestrator | 2025-05-06 01:16:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:50.582188 | orchestrator | 2025-05-06 01:16:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:53.635088 | orchestrator | 2025-05-06 01:16:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:53.635274 | orchestrator | 2025-05-06 01:16:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:56.680791 | orchestrator | 2025-05-06 01:16:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:56.680958 | orchestrator | 2025-05-06 01:16:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:16:59.728180 | orchestrator | 2025-05-06 01:16:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:16:59.728323 | orchestrator | 2025-05-06 01:16:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:02.774283 | orchestrator | 2025-05-06 01:16:59 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:02.774397 | orchestrator | 2025-05-06 01:17:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:05.821031 | orchestrator | 2025-05-06 01:17:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:05.821257 | orchestrator | 2025-05-06 01:17:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:08.876726 | orchestrator | 2025-05-06 01:17:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:08.876878 | orchestrator | 2025-05-06 01:17:08 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:11.924544 | orchestrator | 2025-05-06 01:17:08 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:11.924694 
| orchestrator | 2025-05-06 01:17:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:14.968667 | orchestrator | 2025-05-06 01:17:11 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:14.968843 | orchestrator | 2025-05-06 01:17:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:18.015971 | orchestrator | 2025-05-06 01:17:14 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:18.016158 | orchestrator | 2025-05-06 01:17:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:21.064291 | orchestrator | 2025-05-06 01:17:18 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:21.064426 | orchestrator | 2025-05-06 01:17:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:24.103159 | orchestrator | 2025-05-06 01:17:21 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:24.103309 | orchestrator | 2025-05-06 01:17:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:27.150720 | orchestrator | 2025-05-06 01:17:24 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:27.150863 | orchestrator | 2025-05-06 01:17:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:30.201377 | orchestrator | 2025-05-06 01:17:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:30.201522 | orchestrator | 2025-05-06 01:17:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:33.243624 | orchestrator | 2025-05-06 01:17:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:33.243793 | orchestrator | 2025-05-06 01:17:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:36.293464 | orchestrator | 2025-05-06 01:17:33 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:36.293613 | orchestrator 
| 2025-05-06 01:17:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:39.338238 | orchestrator | 2025-05-06 01:17:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:39.338386 | orchestrator | 2025-05-06 01:17:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:42.386629 | orchestrator | 2025-05-06 01:17:39 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:42.386814 | orchestrator | 2025-05-06 01:17:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:45.430226 | orchestrator | 2025-05-06 01:17:42 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:45.430344 | orchestrator | 2025-05-06 01:17:45 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:48.475483 | orchestrator | 2025-05-06 01:17:45 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:48.475633 | orchestrator | 2025-05-06 01:17:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:51.524877 | orchestrator | 2025-05-06 01:17:48 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:51.525110 | orchestrator | 2025-05-06 01:17:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:54.570706 | orchestrator | 2025-05-06 01:17:51 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:54.570850 | orchestrator | 2025-05-06 01:17:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:17:57.616251 | orchestrator | 2025-05-06 01:17:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:17:57.616413 | orchestrator | 2025-05-06 01:17:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:00.668667 | orchestrator | 2025-05-06 01:17:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:00.668813 | orchestrator | 2025-05-06 
01:18:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:03.714621 | orchestrator | 2025-05-06 01:18:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:03.714814 | orchestrator | 2025-05-06 01:18:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:06.762195 | orchestrator | 2025-05-06 01:18:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:06.762344 | orchestrator | 2025-05-06 01:18:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:09.811352 | orchestrator | 2025-05-06 01:18:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:09.811516 | orchestrator | 2025-05-06 01:18:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:12.860590 | orchestrator | 2025-05-06 01:18:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:12.860737 | orchestrator | 2025-05-06 01:18:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:15.906383 | orchestrator | 2025-05-06 01:18:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:15.906527 | orchestrator | 2025-05-06 01:18:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:18.954813 | orchestrator | 2025-05-06 01:18:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:18.954971 | orchestrator | 2025-05-06 01:18:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:22.003118 | orchestrator | 2025-05-06 01:18:18 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:22.003324 | orchestrator | 2025-05-06 01:18:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:25.048324 | orchestrator | 2025-05-06 01:18:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:25.048477 | orchestrator | 2025-05-06 01:18:25 | INFO 
 | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:28.094900 | orchestrator | 2025-05-06 01:18:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:28.095072 | orchestrator | 2025-05-06 01:18:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:28.095229 | orchestrator | 2025-05-06 01:18:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:31.140648 | orchestrator | 2025-05-06 01:18:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:34.183367 | orchestrator | 2025-05-06 01:18:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:34.183467 | orchestrator | 2025-05-06 01:18:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:37.220961 | orchestrator | 2025-05-06 01:18:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:37.221184 | orchestrator | 2025-05-06 01:18:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:40.271927 | orchestrator | 2025-05-06 01:18:37 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:40.272124 | orchestrator | 2025-05-06 01:18:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:43.320206 | orchestrator | 2025-05-06 01:18:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:43.320386 | orchestrator | 2025-05-06 01:18:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:46.365479 | orchestrator | 2025-05-06 01:18:43 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:46.365601 | orchestrator | 2025-05-06 01:18:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:49.414522 | orchestrator | 2025-05-06 01:18:46 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:49.414672 | orchestrator | 2025-05-06 01:18:49 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:52.458222 | orchestrator | 2025-05-06 01:18:49 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:52.458364 | orchestrator | 2025-05-06 01:18:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:55.505116 | orchestrator | 2025-05-06 01:18:52 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:55.505255 | orchestrator | 2025-05-06 01:18:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:18:58.556736 | orchestrator | 2025-05-06 01:18:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:18:58.556887 | orchestrator | 2025-05-06 01:18:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:01.602702 | orchestrator | 2025-05-06 01:18:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:01.602847 | orchestrator | 2025-05-06 01:19:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:04.651580 | orchestrator | 2025-05-06 01:19:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:04.651739 | orchestrator | 2025-05-06 01:19:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:07.698835 | orchestrator | 2025-05-06 01:19:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:07.699055 | orchestrator | 2025-05-06 01:19:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:10.742523 | orchestrator | 2025-05-06 01:19:07 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:10.742668 | orchestrator | 2025-05-06 01:19:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:13.796652 | orchestrator | 2025-05-06 01:19:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:13.796800 | orchestrator | 2025-05-06 01:19:13 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:16.849161 | orchestrator | 2025-05-06 01:19:13 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:16.849304 | orchestrator | 2025-05-06 01:19:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:19.898877 | orchestrator | 2025-05-06 01:19:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:19.899129 | orchestrator | 2025-05-06 01:19:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:22.949623 | orchestrator | 2025-05-06 01:19:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:22.949801 | orchestrator | 2025-05-06 01:19:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:25.995397 | orchestrator | 2025-05-06 01:19:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:25.995576 | orchestrator | 2025-05-06 01:19:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:29.045697 | orchestrator | 2025-05-06 01:19:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:29.045851 | orchestrator | 2025-05-06 01:19:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:32.095206 | orchestrator | 2025-05-06 01:19:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:32.095374 | orchestrator | 2025-05-06 01:19:32 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:35.145798 | orchestrator | 2025-05-06 01:19:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:35.145938 | orchestrator | 2025-05-06 01:19:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:38.197378 | orchestrator | 2025-05-06 01:19:35 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:38.197531 | orchestrator | 2025-05-06 01:19:38 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:41.243654 | orchestrator | 2025-05-06 01:19:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:41.243826 | orchestrator | 2025-05-06 01:19:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:44.296586 | orchestrator | 2025-05-06 01:19:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:44.296761 | orchestrator | 2025-05-06 01:19:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:47.341502 | orchestrator | 2025-05-06 01:19:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:47.341624 | orchestrator | 2025-05-06 01:19:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:50.388475 | orchestrator | 2025-05-06 01:19:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:50.388632 | orchestrator | 2025-05-06 01:19:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:53.430576 | orchestrator | 2025-05-06 01:19:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:53.430732 | orchestrator | 2025-05-06 01:19:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:56.479636 | orchestrator | 2025-05-06 01:19:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:56.479787 | orchestrator | 2025-05-06 01:19:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:19:59.534196 | orchestrator | 2025-05-06 01:19:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:19:59.534338 | orchestrator | 2025-05-06 01:19:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:02.587822 | orchestrator | 2025-05-06 01:19:59 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:02.588025 | orchestrator | 2025-05-06 01:20:02 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:05.641284 | orchestrator | 2025-05-06 01:20:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:05.641429 | orchestrator | 2025-05-06 01:20:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:08.693193 | orchestrator | 2025-05-06 01:20:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:08.693374 | orchestrator | 2025-05-06 01:20:08 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:11.743590 | orchestrator | 2025-05-06 01:20:08 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:11.743699 | orchestrator | 2025-05-06 01:20:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:14.793588 | orchestrator | 2025-05-06 01:20:11 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:14.793729 | orchestrator | 2025-05-06 01:20:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:17.849498 | orchestrator | 2025-05-06 01:20:14 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:17.849640 | orchestrator | 2025-05-06 01:20:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:20.892145 | orchestrator | 2025-05-06 01:20:17 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:20.892311 | orchestrator | 2025-05-06 01:20:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:23.937633 | orchestrator | 2025-05-06 01:20:20 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:23.937779 | orchestrator | 2025-05-06 01:20:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:26.978136 | orchestrator | 2025-05-06 01:20:23 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:26.978290 | orchestrator | 2025-05-06 01:20:26 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:30.022078 | orchestrator | 2025-05-06 01:20:26 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:30.022228 | orchestrator | 2025-05-06 01:20:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:33.064561 | orchestrator | 2025-05-06 01:20:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:33.064708 | orchestrator | 2025-05-06 01:20:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:36.109304 | orchestrator | 2025-05-06 01:20:33 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:36.109423 | orchestrator | 2025-05-06 01:20:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:39.154998 | orchestrator | 2025-05-06 01:20:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:39.155214 | orchestrator | 2025-05-06 01:20:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:42.209375 | orchestrator | 2025-05-06 01:20:39 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:42.209520 | orchestrator | 2025-05-06 01:20:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:45.276564 | orchestrator | 2025-05-06 01:20:42 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:45.276714 | orchestrator | 2025-05-06 01:20:45 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:48.325135 | orchestrator | 2025-05-06 01:20:45 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:48.325295 | orchestrator | 2025-05-06 01:20:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:51.373329 | orchestrator | 2025-05-06 01:20:48 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:51.373473 | orchestrator | 2025-05-06 01:20:51 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:54.413961 | orchestrator | 2025-05-06 01:20:51 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:54.414170 | orchestrator | 2025-05-06 01:20:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:20:57.457193 | orchestrator | 2025-05-06 01:20:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:20:57.457373 | orchestrator | 2025-05-06 01:20:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:00.507458 | orchestrator | 2025-05-06 01:20:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:00.507608 | orchestrator | 2025-05-06 01:21:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:03.559641 | orchestrator | 2025-05-06 01:21:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:03.559792 | orchestrator | 2025-05-06 01:21:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:06.618110 | orchestrator | 2025-05-06 01:21:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:06.618258 | orchestrator | 2025-05-06 01:21:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:09.665573 | orchestrator | 2025-05-06 01:21:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:09.665721 | orchestrator | 2025-05-06 01:21:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:12.715872 | orchestrator | 2025-05-06 01:21:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:12.716050 | orchestrator | 2025-05-06 01:21:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:15.779366 | orchestrator | 2025-05-06 01:21:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:15.779505 | orchestrator | 2025-05-06 01:21:15 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:18.819667 | orchestrator | 2025-05-06 01:21:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:18.819804 | orchestrator | 2025-05-06 01:21:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:21.868266 | orchestrator | 2025-05-06 01:21:18 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:21.868421 | orchestrator | 2025-05-06 01:21:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:24.921583 | orchestrator | 2025-05-06 01:21:21 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:24.921727 | orchestrator | 2025-05-06 01:21:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:27.972345 | orchestrator | 2025-05-06 01:21:24 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:27.972469 | orchestrator | 2025-05-06 01:21:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:31.034780 | orchestrator | 2025-05-06 01:21:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:31.034998 | orchestrator | 2025-05-06 01:21:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:34.086195 | orchestrator | 2025-05-06 01:21:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:34.086345 | orchestrator | 2025-05-06 01:21:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:37.132862 | orchestrator | 2025-05-06 01:21:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:37.133062 | orchestrator | 2025-05-06 01:21:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:40.181409 | orchestrator | 2025-05-06 01:21:37 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:40.181947 | orchestrator | 2025-05-06 01:21:40 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:21:43.229644 | orchestrator | 2025-05-06 01:21:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:21:43.229785 | orchestrator | 2025-05-06 01:21:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
[... identical "Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED" / "Wait 1 second(s) until the next check" poll cycle repeated every ~3 s, 01:21:46 through 01:23:51 ...]
2025-05-06 01:23:54.307234 | orchestrator | 2025-05-06 01:23:54 | INFO  | Task 6c294b20-c5d3-4859-b998-e1b70f874beb is in state STARTED 2025-05-06 01:23:54.307526 | orchestrator | 2025-05-06 01:23:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:23:54.307884 | orchestrator | 2025-05-06 01:23:54 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:24:03.479394 | orchestrator | 2025-05-06 01:24:03 | INFO  | Task 6c294b20-c5d3-4859-b998-e1b70f874beb is in state SUCCESS 2025-05-06 01:24:03.480961 | orchestrator | 2025-05-06 01:24:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
[... identical poll cycle for task 6bf1245d-e18f-4d09-b4c2-f5227351db01 repeated every ~3 s, 01:24:06 through 01:30:03 ...]
2025-05-06 01:30:06.500607 | orchestrator | 2025-05-06 01:30:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:06.500801 | orchestrator | 2025-05-06 01:30:06 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:09.550354 | orchestrator | 2025-05-06 01:30:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:09.550496 | orchestrator | 2025-05-06 01:30:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:12.594003 | orchestrator | 2025-05-06 01:30:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:12.594219 | orchestrator | 2025-05-06 01:30:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:15.646596 | orchestrator | 2025-05-06 01:30:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:15.646822 | orchestrator | 2025-05-06 01:30:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:18.693383 | orchestrator | 2025-05-06 01:30:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:18.693530 | orchestrator | 2025-05-06 01:30:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:21.740112 | orchestrator | 2025-05-06 01:30:18 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:21.740261 | orchestrator | 2025-05-06 01:30:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:24.785710 | orchestrator | 2025-05-06 01:30:21 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:24.785869 | orchestrator | 2025-05-06 01:30:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:27.836196 | orchestrator | 2025-05-06 01:30:24 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:27.836383 | orchestrator | 2025-05-06 01:30:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:30.884900 | orchestrator | 2025-05-06 01:30:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:30.885181 | orchestrator | 2025-05-06 01:30:30 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:33.940597 | orchestrator | 2025-05-06 01:30:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:33.940867 | orchestrator | 2025-05-06 01:30:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:36.982938 | orchestrator | 2025-05-06 01:30:33 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:36.983085 | orchestrator | 2025-05-06 01:30:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:40.033991 | orchestrator | 2025-05-06 01:30:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:40.034236 | orchestrator | 2025-05-06 01:30:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:43.087515 | orchestrator | 2025-05-06 01:30:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:43.087717 | orchestrator | 2025-05-06 01:30:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:46.135171 | orchestrator | 2025-05-06 01:30:43 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:46.135341 | orchestrator | 2025-05-06 01:30:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:49.176917 | orchestrator | 2025-05-06 01:30:46 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:49.177063 | orchestrator | 2025-05-06 01:30:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:52.221873 | orchestrator | 2025-05-06 01:30:49 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:52.222095 | orchestrator | 2025-05-06 01:30:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:55.271651 | orchestrator | 2025-05-06 01:30:52 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:55.271828 | orchestrator | 2025-05-06 01:30:55 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:30:58.329322 | orchestrator | 2025-05-06 01:30:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:30:58.329511 | orchestrator | 2025-05-06 01:30:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:01.380778 | orchestrator | 2025-05-06 01:30:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:01.380930 | orchestrator | 2025-05-06 01:31:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:04.441191 | orchestrator | 2025-05-06 01:31:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:04.441347 | orchestrator | 2025-05-06 01:31:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:07.500063 | orchestrator | 2025-05-06 01:31:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:07.500211 | orchestrator | 2025-05-06 01:31:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:10.551350 | orchestrator | 2025-05-06 01:31:07 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:10.551492 | orchestrator | 2025-05-06 01:31:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:13.600219 | orchestrator | 2025-05-06 01:31:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:13.600367 | orchestrator | 2025-05-06 01:31:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:16.657200 | orchestrator | 2025-05-06 01:31:13 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:16.657350 | orchestrator | 2025-05-06 01:31:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:19.719427 | orchestrator | 2025-05-06 01:31:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:19.719572 | orchestrator | 2025-05-06 01:31:19 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:22.779679 | orchestrator | 2025-05-06 01:31:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:22.779825 | orchestrator | 2025-05-06 01:31:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:25.829851 | orchestrator | 2025-05-06 01:31:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:25.830096 | orchestrator | 2025-05-06 01:31:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:28.882647 | orchestrator | 2025-05-06 01:31:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:28.882789 | orchestrator | 2025-05-06 01:31:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:28.882934 | orchestrator | 2025-05-06 01:31:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:31.930361 | orchestrator | 2025-05-06 01:31:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:34.983961 | orchestrator | 2025-05-06 01:31:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:34.984107 | orchestrator | 2025-05-06 01:31:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:38.034146 | orchestrator | 2025-05-06 01:31:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:38.034288 | orchestrator | 2025-05-06 01:31:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:41.086999 | orchestrator | 2025-05-06 01:31:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:41.087141 | orchestrator | 2025-05-06 01:31:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:44.135148 | orchestrator | 2025-05-06 01:31:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:44.135300 | orchestrator | 2025-05-06 01:31:44 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:47.185733 | orchestrator | 2025-05-06 01:31:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:47.185849 | orchestrator | 2025-05-06 01:31:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:47.185928 | orchestrator | 2025-05-06 01:31:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:50.245190 | orchestrator | 2025-05-06 01:31:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:53.289336 | orchestrator | 2025-05-06 01:31:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:53.289478 | orchestrator | 2025-05-06 01:31:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:56.334790 | orchestrator | 2025-05-06 01:31:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:56.334953 | orchestrator | 2025-05-06 01:31:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:59.392377 | orchestrator | 2025-05-06 01:31:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:31:59.392527 | orchestrator | 2025-05-06 01:31:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:31:59.392708 | orchestrator | 2025-05-06 01:31:59 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:02.431925 | orchestrator | 2025-05-06 01:32:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:05.483893 | orchestrator | 2025-05-06 01:32:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:05.484103 | orchestrator | 2025-05-06 01:32:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:05.484265 | orchestrator | 2025-05-06 01:32:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:08.533139 | orchestrator | 2025-05-06 01:32:08 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:11.581638 | orchestrator | 2025-05-06 01:32:08 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:11.581814 | orchestrator | 2025-05-06 01:32:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:14.623232 | orchestrator | 2025-05-06 01:32:11 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:14.623379 | orchestrator | 2025-05-06 01:32:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:17.668170 | orchestrator | 2025-05-06 01:32:14 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:17.668316 | orchestrator | 2025-05-06 01:32:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:20.714104 | orchestrator | 2025-05-06 01:32:17 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:20.714252 | orchestrator | 2025-05-06 01:32:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:23.766336 | orchestrator | 2025-05-06 01:32:20 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:23.766480 | orchestrator | 2025-05-06 01:32:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:26.823288 | orchestrator | 2025-05-06 01:32:23 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:26.823445 | orchestrator | 2025-05-06 01:32:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:29.871344 | orchestrator | 2025-05-06 01:32:26 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:29.871496 | orchestrator | 2025-05-06 01:32:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:32.920827 | orchestrator | 2025-05-06 01:32:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:32.920975 | orchestrator | 2025-05-06 01:32:32 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:35.970255 | orchestrator | 2025-05-06 01:32:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:35.970380 | orchestrator | 2025-05-06 01:32:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:39.019371 | orchestrator | 2025-05-06 01:32:35 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:39.019514 | orchestrator | 2025-05-06 01:32:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:42.066138 | orchestrator | 2025-05-06 01:32:39 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:42.066283 | orchestrator | 2025-05-06 01:32:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:45.114634 | orchestrator | 2025-05-06 01:32:42 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:45.114824 | orchestrator | 2025-05-06 01:32:45 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:48.165542 | orchestrator | 2025-05-06 01:32:45 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:48.165749 | orchestrator | 2025-05-06 01:32:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:51.221537 | orchestrator | 2025-05-06 01:32:48 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:51.221782 | orchestrator | 2025-05-06 01:32:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:54.270190 | orchestrator | 2025-05-06 01:32:51 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:54.270333 | orchestrator | 2025-05-06 01:32:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:32:57.317206 | orchestrator | 2025-05-06 01:32:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:32:57.317363 | orchestrator | 2025-05-06 01:32:57 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:00.370189 | orchestrator | 2025-05-06 01:32:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:00.370343 | orchestrator | 2025-05-06 01:33:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:03.420604 | orchestrator | 2025-05-06 01:33:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:03.420753 | orchestrator | 2025-05-06 01:33:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:06.472079 | orchestrator | 2025-05-06 01:33:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:06.472232 | orchestrator | 2025-05-06 01:33:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:09.523692 | orchestrator | 2025-05-06 01:33:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:09.523855 | orchestrator | 2025-05-06 01:33:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:12.565736 | orchestrator | 2025-05-06 01:33:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:12.565888 | orchestrator | 2025-05-06 01:33:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:15.620378 | orchestrator | 2025-05-06 01:33:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:15.620618 | orchestrator | 2025-05-06 01:33:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:18.665519 | orchestrator | 2025-05-06 01:33:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:18.665743 | orchestrator | 2025-05-06 01:33:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:21.722289 | orchestrator | 2025-05-06 01:33:18 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:21.722412 | orchestrator | 2025-05-06 01:33:21 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:24.771037 | orchestrator | 2025-05-06 01:33:21 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:24.771212 | orchestrator | 2025-05-06 01:33:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:27.821746 | orchestrator | 2025-05-06 01:33:24 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:27.821900 | orchestrator | 2025-05-06 01:33:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:30.869637 | orchestrator | 2025-05-06 01:33:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:30.869783 | orchestrator | 2025-05-06 01:33:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:33.921746 | orchestrator | 2025-05-06 01:33:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:33.921898 | orchestrator | 2025-05-06 01:33:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:36.970198 | orchestrator | 2025-05-06 01:33:33 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:36.970347 | orchestrator | 2025-05-06 01:33:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:40.017674 | orchestrator | 2025-05-06 01:33:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:40.017823 | orchestrator | 2025-05-06 01:33:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:43.066583 | orchestrator | 2025-05-06 01:33:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:43.066730 | orchestrator | 2025-05-06 01:33:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:33:46.112044 | orchestrator | 2025-05-06 01:33:43 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:33:46.112209 | orchestrator | 2025-05-06 01:33:46 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:33:49.151721 | orchestrator | 2025-05-06 01:33:46 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:33:49.151894 | orchestrator | 2025-05-06 01:33:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:33:52.206325 | orchestrator | 2025-05-06 01:33:49 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:33:52.206467 | orchestrator | 2025-05-06 01:33:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:33:55.258304 | orchestrator | 2025-05-06 01:33:52 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:33:55.258501 | orchestrator | 2025-05-06 01:33:55 | INFO  | Task 97d8e8e8-c677-49b7-a277-93b7321fc602 is in state STARTED
2025-05-06 01:33:55.261373 | orchestrator | 2025-05-06 01:33:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:33:55.261770 | orchestrator | 2025-05-06 01:33:55 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:33:58.314937 | orchestrator | 2025-05-06 01:33:58 | INFO  | Task 97d8e8e8-c677-49b7-a277-93b7321fc602 is in state STARTED
2025-05-06 01:33:58.316966 | orchestrator | 2025-05-06 01:33:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:34:01.365268 | orchestrator | 2025-05-06 01:33:58 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:34:01.365399 | orchestrator | 2025-05-06 01:34:01 | INFO  | Task 97d8e8e8-c677-49b7-a277-93b7321fc602 is in state STARTED
2025-05-06 01:34:04.407073 | orchestrator | 2025-05-06 01:34:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:34:04.407220 | orchestrator | 2025-05-06 01:34:01 | INFO  | Wait 1 second(s) until the next check
2025-05-06 01:34:04.407258 | orchestrator | 2025-05-06 01:34:04 | INFO  | Task 97d8e8e8-c677-49b7-a277-93b7321fc602 is in state SUCCESS
2025-05-06 01:34:04.408083 | orchestrator | 
2025-05-06 01:34:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
[... repeated "Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED" / "Wait 1 second(s) until the next check" polling entries elided (01:34:04 through 01:36:52, one check every ~3 s) ...]
2025-05-06 01:36:55.143198 | orchestrator | 2025-05-06 01:36:55 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:36:58.185322 | orchestrator | 2025-05-06 01:36:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:36:58.185528 | orchestrator | 2025-05-06 01:36:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:01.233911 | orchestrator | 2025-05-06 01:36:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:01.234121 | orchestrator | 2025-05-06 01:37:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:04.280267 | orchestrator | 2025-05-06 01:37:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:04.280466 | orchestrator | 2025-05-06 01:37:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:07.326167 | orchestrator | 2025-05-06 01:37:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:07.326320 | orchestrator | 2025-05-06 01:37:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:10.378386 | orchestrator | 2025-05-06 01:37:07 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:10.378595 | orchestrator | 2025-05-06 01:37:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:13.426082 | orchestrator | 2025-05-06 01:37:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:13.426230 | orchestrator | 2025-05-06 01:37:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:16.464940 | orchestrator | 2025-05-06 01:37:13 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:16.465085 | orchestrator | 2025-05-06 01:37:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:19.514935 | orchestrator | 2025-05-06 01:37:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:19.515073 | orchestrator | 2025-05-06 01:37:19 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:22.565957 | orchestrator | 2025-05-06 01:37:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:22.566169 | orchestrator | 2025-05-06 01:37:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:25.613007 | orchestrator | 2025-05-06 01:37:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:25.613175 | orchestrator | 2025-05-06 01:37:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:28.661731 | orchestrator | 2025-05-06 01:37:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:28.661907 | orchestrator | 2025-05-06 01:37:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:31.710922 | orchestrator | 2025-05-06 01:37:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:31.711070 | orchestrator | 2025-05-06 01:37:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:34.761172 | orchestrator | 2025-05-06 01:37:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:34.761324 | orchestrator | 2025-05-06 01:37:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:37.810366 | orchestrator | 2025-05-06 01:37:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:37.810556 | orchestrator | 2025-05-06 01:37:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:40.865738 | orchestrator | 2025-05-06 01:37:37 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:40.865888 | orchestrator | 2025-05-06 01:37:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:43.913153 | orchestrator | 2025-05-06 01:37:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:43.913332 | orchestrator | 2025-05-06 01:37:43 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:46.959314 | orchestrator | 2025-05-06 01:37:43 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:46.959535 | orchestrator | 2025-05-06 01:37:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:49.996437 | orchestrator | 2025-05-06 01:37:46 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:49.996584 | orchestrator | 2025-05-06 01:37:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:53.048008 | orchestrator | 2025-05-06 01:37:49 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:53.048167 | orchestrator | 2025-05-06 01:37:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:56.102104 | orchestrator | 2025-05-06 01:37:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:56.102267 | orchestrator | 2025-05-06 01:37:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:37:59.157592 | orchestrator | 2025-05-06 01:37:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:37:59.157738 | orchestrator | 2025-05-06 01:37:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:02.207272 | orchestrator | 2025-05-06 01:37:59 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:02.207454 | orchestrator | 2025-05-06 01:38:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:05.259958 | orchestrator | 2025-05-06 01:38:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:05.260112 | orchestrator | 2025-05-06 01:38:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:08.309268 | orchestrator | 2025-05-06 01:38:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:08.309532 | orchestrator | 2025-05-06 01:38:08 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:11.358286 | orchestrator | 2025-05-06 01:38:08 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:11.358461 | orchestrator | 2025-05-06 01:38:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:14.412491 | orchestrator | 2025-05-06 01:38:11 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:14.412640 | orchestrator | 2025-05-06 01:38:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:17.464791 | orchestrator | 2025-05-06 01:38:14 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:17.464944 | orchestrator | 2025-05-06 01:38:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:20.520186 | orchestrator | 2025-05-06 01:38:17 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:20.520325 | orchestrator | 2025-05-06 01:38:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:23.571347 | orchestrator | 2025-05-06 01:38:20 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:23.571610 | orchestrator | 2025-05-06 01:38:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:26.619165 | orchestrator | 2025-05-06 01:38:23 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:26.619318 | orchestrator | 2025-05-06 01:38:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:29.663051 | orchestrator | 2025-05-06 01:38:26 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:29.663194 | orchestrator | 2025-05-06 01:38:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:32.712134 | orchestrator | 2025-05-06 01:38:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:32.712284 | orchestrator | 2025-05-06 01:38:32 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:35.762641 | orchestrator | 2025-05-06 01:38:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:35.762797 | orchestrator | 2025-05-06 01:38:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:38.808717 | orchestrator | 2025-05-06 01:38:35 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:38.808862 | orchestrator | 2025-05-06 01:38:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:41.872005 | orchestrator | 2025-05-06 01:38:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:41.872156 | orchestrator | 2025-05-06 01:38:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:44.916810 | orchestrator | 2025-05-06 01:38:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:44.916950 | orchestrator | 2025-05-06 01:38:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:47.969623 | orchestrator | 2025-05-06 01:38:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:47.969775 | orchestrator | 2025-05-06 01:38:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:51.030091 | orchestrator | 2025-05-06 01:38:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:51.030235 | orchestrator | 2025-05-06 01:38:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:54.074699 | orchestrator | 2025-05-06 01:38:51 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:54.074879 | orchestrator | 2025-05-06 01:38:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:38:57.123198 | orchestrator | 2025-05-06 01:38:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:38:57.123398 | orchestrator | 2025-05-06 01:38:57 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:00.175089 | orchestrator | 2025-05-06 01:38:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:00.175243 | orchestrator | 2025-05-06 01:39:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:03.221306 | orchestrator | 2025-05-06 01:39:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:03.221489 | orchestrator | 2025-05-06 01:39:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:06.270118 | orchestrator | 2025-05-06 01:39:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:06.270271 | orchestrator | 2025-05-06 01:39:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:09.322798 | orchestrator | 2025-05-06 01:39:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:09.322939 | orchestrator | 2025-05-06 01:39:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:12.365503 | orchestrator | 2025-05-06 01:39:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:12.365653 | orchestrator | 2025-05-06 01:39:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:15.420527 | orchestrator | 2025-05-06 01:39:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:15.420628 | orchestrator | 2025-05-06 01:39:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:18.463808 | orchestrator | 2025-05-06 01:39:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:18.463962 | orchestrator | 2025-05-06 01:39:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:21.513726 | orchestrator | 2025-05-06 01:39:18 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:21.513870 | orchestrator | 2025-05-06 01:39:21 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:24.556489 | orchestrator | 2025-05-06 01:39:21 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:24.556644 | orchestrator | 2025-05-06 01:39:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:27.605938 | orchestrator | 2025-05-06 01:39:24 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:27.606159 | orchestrator | 2025-05-06 01:39:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:30.665098 | orchestrator | 2025-05-06 01:39:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:30.665239 | orchestrator | 2025-05-06 01:39:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:33.707642 | orchestrator | 2025-05-06 01:39:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:33.707785 | orchestrator | 2025-05-06 01:39:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:36.756604 | orchestrator | 2025-05-06 01:39:33 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:36.756768 | orchestrator | 2025-05-06 01:39:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:39.802070 | orchestrator | 2025-05-06 01:39:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:39.802265 | orchestrator | 2025-05-06 01:39:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:42.853612 | orchestrator | 2025-05-06 01:39:39 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:42.853763 | orchestrator | 2025-05-06 01:39:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:45.901835 | orchestrator | 2025-05-06 01:39:42 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:45.901981 | orchestrator | 2025-05-06 01:39:45 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:48.948855 | orchestrator | 2025-05-06 01:39:45 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:48.949012 | orchestrator | 2025-05-06 01:39:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:51.989883 | orchestrator | 2025-05-06 01:39:48 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:51.990099 | orchestrator | 2025-05-06 01:39:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:55.033925 | orchestrator | 2025-05-06 01:39:51 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:55.034154 | orchestrator | 2025-05-06 01:39:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:39:58.081478 | orchestrator | 2025-05-06 01:39:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:39:58.081636 | orchestrator | 2025-05-06 01:39:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:01.128702 | orchestrator | 2025-05-06 01:39:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:01.128849 | orchestrator | 2025-05-06 01:40:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:04.172606 | orchestrator | 2025-05-06 01:40:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:04.172748 | orchestrator | 2025-05-06 01:40:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:07.223906 | orchestrator | 2025-05-06 01:40:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:07.224062 | orchestrator | 2025-05-06 01:40:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:10.273633 | orchestrator | 2025-05-06 01:40:07 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:10.273779 | orchestrator | 2025-05-06 01:40:10 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:13.318671 | orchestrator | 2025-05-06 01:40:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:13.318821 | orchestrator | 2025-05-06 01:40:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:16.376735 | orchestrator | 2025-05-06 01:40:13 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:16.376880 | orchestrator | 2025-05-06 01:40:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:19.423866 | orchestrator | 2025-05-06 01:40:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:19.424016 | orchestrator | 2025-05-06 01:40:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:22.467820 | orchestrator | 2025-05-06 01:40:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:22.467959 | orchestrator | 2025-05-06 01:40:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:25.517243 | orchestrator | 2025-05-06 01:40:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:25.517488 | orchestrator | 2025-05-06 01:40:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:28.564862 | orchestrator | 2025-05-06 01:40:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:28.565017 | orchestrator | 2025-05-06 01:40:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:31.609430 | orchestrator | 2025-05-06 01:40:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:31.609584 | orchestrator | 2025-05-06 01:40:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:34.663372 | orchestrator | 2025-05-06 01:40:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:34.663548 | orchestrator | 2025-05-06 01:40:34 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:37.710696 | orchestrator | 2025-05-06 01:40:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:37.710848 | orchestrator | 2025-05-06 01:40:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:40.759636 | orchestrator | 2025-05-06 01:40:37 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:40.759779 | orchestrator | 2025-05-06 01:40:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:43.806209 | orchestrator | 2025-05-06 01:40:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:43.806414 | orchestrator | 2025-05-06 01:40:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:46.847383 | orchestrator | 2025-05-06 01:40:43 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:46.847525 | orchestrator | 2025-05-06 01:40:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:49.897263 | orchestrator | 2025-05-06 01:40:46 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:49.897482 | orchestrator | 2025-05-06 01:40:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:52.944616 | orchestrator | 2025-05-06 01:40:49 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:52.944757 | orchestrator | 2025-05-06 01:40:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:55.995147 | orchestrator | 2025-05-06 01:40:52 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:55.995395 | orchestrator | 2025-05-06 01:40:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:40:59.040253 | orchestrator | 2025-05-06 01:40:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:40:59.040457 | orchestrator | 2025-05-06 01:40:59 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:02.083746 | orchestrator | 2025-05-06 01:40:59 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:02.083899 | orchestrator | 2025-05-06 01:41:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:05.134921 | orchestrator | 2025-05-06 01:41:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:05.135070 | orchestrator | 2025-05-06 01:41:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:08.179554 | orchestrator | 2025-05-06 01:41:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:08.179708 | orchestrator | 2025-05-06 01:41:08 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:11.228060 | orchestrator | 2025-05-06 01:41:08 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:11.228241 | orchestrator | 2025-05-06 01:41:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:14.279050 | orchestrator | 2025-05-06 01:41:11 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:14.279196 | orchestrator | 2025-05-06 01:41:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:17.321431 | orchestrator | 2025-05-06 01:41:14 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:17.321598 | orchestrator | 2025-05-06 01:41:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:20.375729 | orchestrator | 2025-05-06 01:41:17 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:20.375882 | orchestrator | 2025-05-06 01:41:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:23.424042 | orchestrator | 2025-05-06 01:41:20 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:23.424187 | orchestrator | 2025-05-06 01:41:23 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:26.468214 | orchestrator | 2025-05-06 01:41:23 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:26.468401 | orchestrator | 2025-05-06 01:41:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:29.519353 | orchestrator | 2025-05-06 01:41:26 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:29.519438 | orchestrator | 2025-05-06 01:41:29 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:29.519812 | orchestrator | 2025-05-06 01:41:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:32.567462 | orchestrator | 2025-05-06 01:41:32 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:32.567703 | orchestrator | 2025-05-06 01:41:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:35.618516 | orchestrator | 2025-05-06 01:41:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:38.672152 | orchestrator | 2025-05-06 01:41:35 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:38.672352 | orchestrator | 2025-05-06 01:41:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:41.721682 | orchestrator | 2025-05-06 01:41:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:41.721880 | orchestrator | 2025-05-06 01:41:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:44.766744 | orchestrator | 2025-05-06 01:41:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:44.766886 | orchestrator | 2025-05-06 01:41:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:47.811705 | orchestrator | 2025-05-06 01:41:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:47.811852 | orchestrator | 2025-05-06 01:41:47 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:50.857699 | orchestrator | 2025-05-06 01:41:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:50.857960 | orchestrator | 2025-05-06 01:41:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:53.913352 | orchestrator | 2025-05-06 01:41:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:53.913509 | orchestrator | 2025-05-06 01:41:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:41:56.958240 | orchestrator | 2025-05-06 01:41:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:41:56.958464 | orchestrator | 2025-05-06 01:41:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:42:00.015611 | orchestrator | 2025-05-06 01:41:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:42:00.015764 | orchestrator | 2025-05-06 01:42:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:42:03.063200 | orchestrator | 2025-05-06 01:42:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:42:03.063385 | orchestrator | 2025-05-06 01:42:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:42:06.111964 | orchestrator | 2025-05-06 01:42:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:42:06.112113 | orchestrator | 2025-05-06 01:42:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:42:09.166875 | orchestrator | 2025-05-06 01:42:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:42:09.167006 | orchestrator | 2025-05-06 01:42:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:42:12.216353 | orchestrator | 2025-05-06 01:42:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:42:12.216509 | orchestrator | 2025-05-06 01:42:12 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
[... near-identical polling entries from 01:42:12 through 01:43:49 elided: Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 remained in state STARTED, rechecked every ~3 seconds ...]
2025-05-06 01:43:49.856077 | orchestrator | 2025-05-06 01:43:49 | INFO  | Task
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:43:52.910343 | orchestrator | 2025-05-06 01:43:52 | INFO  | Task 3ce49702-0aa1-4f45-a6a5-6f89cdefe018 is in state STARTED
2025-05-06 01:44:05.148889 | orchestrator | 2025-05-06 01:44:05 | INFO  | Task 3ce49702-0aa1-4f45-a6a5-6f89cdefe018 is in state SUCCESS
[... near-identical polling entries from 01:44:08 through 01:50:35 elided: Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 remained in state STARTED, rechecked every ~3 seconds ...]
2025-05-06 01:50:38.553143 | orchestrator | 2025-05-06 01:50:38 | INFO  | Task
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:50:41.598872 | orchestrator | 2025-05-06 01:50:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:50:41.599016 | orchestrator | 2025-05-06 01:50:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:50:44.643607 | orchestrator | 2025-05-06 01:50:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:50:44.643784 | orchestrator | 2025-05-06 01:50:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:50:47.696247 | orchestrator | 2025-05-06 01:50:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:50:47.696389 | orchestrator | 2025-05-06 01:50:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:50:50.748082 | orchestrator | 2025-05-06 01:50:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:50:50.748228 | orchestrator | 2025-05-06 01:50:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:50:53.797309 | orchestrator | 2025-05-06 01:50:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:50:53.797470 | orchestrator | 2025-05-06 01:50:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:50:56.843480 | orchestrator | 2025-05-06 01:50:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:50:56.843685 | orchestrator | 2025-05-06 01:50:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:50:59.891725 | orchestrator | 2025-05-06 01:50:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:50:59.891865 | orchestrator | 2025-05-06 01:50:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:02.940465 | orchestrator | 2025-05-06 01:50:59 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:02.940622 | orchestrator | 2025-05-06 01:51:02 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:05.991934 | orchestrator | 2025-05-06 01:51:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:05.992080 | orchestrator | 2025-05-06 01:51:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:09.039437 | orchestrator | 2025-05-06 01:51:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:09.039586 | orchestrator | 2025-05-06 01:51:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:12.083663 | orchestrator | 2025-05-06 01:51:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:12.083848 | orchestrator | 2025-05-06 01:51:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:15.125823 | orchestrator | 2025-05-06 01:51:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:15.125975 | orchestrator | 2025-05-06 01:51:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:18.183816 | orchestrator | 2025-05-06 01:51:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:18.183964 | orchestrator | 2025-05-06 01:51:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:21.234679 | orchestrator | 2025-05-06 01:51:18 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:21.234882 | orchestrator | 2025-05-06 01:51:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:24.283446 | orchestrator | 2025-05-06 01:51:21 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:24.283597 | orchestrator | 2025-05-06 01:51:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:27.332971 | orchestrator | 2025-05-06 01:51:24 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:27.333120 | orchestrator | 2025-05-06 01:51:27 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:30.387532 | orchestrator | 2025-05-06 01:51:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:30.387672 | orchestrator | 2025-05-06 01:51:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:33.432495 | orchestrator | 2025-05-06 01:51:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:33.432647 | orchestrator | 2025-05-06 01:51:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:36.486955 | orchestrator | 2025-05-06 01:51:33 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:36.487100 | orchestrator | 2025-05-06 01:51:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:36.487423 | orchestrator | 2025-05-06 01:51:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:39.527664 | orchestrator | 2025-05-06 01:51:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:42.577192 | orchestrator | 2025-05-06 01:51:39 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:42.577371 | orchestrator | 2025-05-06 01:51:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:45.636181 | orchestrator | 2025-05-06 01:51:42 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:45.636338 | orchestrator | 2025-05-06 01:51:45 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:48.682078 | orchestrator | 2025-05-06 01:51:45 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:48.682232 | orchestrator | 2025-05-06 01:51:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:51.728439 | orchestrator | 2025-05-06 01:51:48 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:51.728589 | orchestrator | 2025-05-06 01:51:51 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:54.779284 | orchestrator | 2025-05-06 01:51:51 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:54.779431 | orchestrator | 2025-05-06 01:51:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:51:57.828842 | orchestrator | 2025-05-06 01:51:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:51:57.828972 | orchestrator | 2025-05-06 01:51:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:00.877435 | orchestrator | 2025-05-06 01:51:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:00.877577 | orchestrator | 2025-05-06 01:52:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:03.922307 | orchestrator | 2025-05-06 01:52:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:03.922456 | orchestrator | 2025-05-06 01:52:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:06.963139 | orchestrator | 2025-05-06 01:52:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:06.963277 | orchestrator | 2025-05-06 01:52:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:10.015242 | orchestrator | 2025-05-06 01:52:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:10.015456 | orchestrator | 2025-05-06 01:52:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:13.066146 | orchestrator | 2025-05-06 01:52:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:13.066296 | orchestrator | 2025-05-06 01:52:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:16.112874 | orchestrator | 2025-05-06 01:52:13 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:16.113066 | orchestrator | 2025-05-06 01:52:16 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:19.153225 | orchestrator | 2025-05-06 01:52:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:19.153364 | orchestrator | 2025-05-06 01:52:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:22.191003 | orchestrator | 2025-05-06 01:52:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:22.191180 | orchestrator | 2025-05-06 01:52:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:25.235890 | orchestrator | 2025-05-06 01:52:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:25.236105 | orchestrator | 2025-05-06 01:52:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:28.286807 | orchestrator | 2025-05-06 01:52:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:28.286955 | orchestrator | 2025-05-06 01:52:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:31.335898 | orchestrator | 2025-05-06 01:52:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:31.336087 | orchestrator | 2025-05-06 01:52:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:34.385747 | orchestrator | 2025-05-06 01:52:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:34.385900 | orchestrator | 2025-05-06 01:52:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:37.432898 | orchestrator | 2025-05-06 01:52:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:37.433118 | orchestrator | 2025-05-06 01:52:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:40.481314 | orchestrator | 2025-05-06 01:52:37 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:40.481458 | orchestrator | 2025-05-06 01:52:40 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:43.522857 | orchestrator | 2025-05-06 01:52:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:43.522997 | orchestrator | 2025-05-06 01:52:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:46.576597 | orchestrator | 2025-05-06 01:52:43 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:46.576754 | orchestrator | 2025-05-06 01:52:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:49.625693 | orchestrator | 2025-05-06 01:52:46 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:49.625839 | orchestrator | 2025-05-06 01:52:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:52.677163 | orchestrator | 2025-05-06 01:52:49 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:52.677346 | orchestrator | 2025-05-06 01:52:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:55.723872 | orchestrator | 2025-05-06 01:52:52 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:55.724022 | orchestrator | 2025-05-06 01:52:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:52:58.771195 | orchestrator | 2025-05-06 01:52:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:52:58.771346 | orchestrator | 2025-05-06 01:52:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:01.818281 | orchestrator | 2025-05-06 01:52:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:01.818415 | orchestrator | 2025-05-06 01:53:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:04.865691 | orchestrator | 2025-05-06 01:53:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:04.865841 | orchestrator | 2025-05-06 01:53:04 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:07.922011 | orchestrator | 2025-05-06 01:53:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:07.922256 | orchestrator | 2025-05-06 01:53:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:10.968481 | orchestrator | 2025-05-06 01:53:07 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:10.968635 | orchestrator | 2025-05-06 01:53:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:14.019235 | orchestrator | 2025-05-06 01:53:10 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:14.019406 | orchestrator | 2025-05-06 01:53:14 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:17.063328 | orchestrator | 2025-05-06 01:53:14 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:17.063482 | orchestrator | 2025-05-06 01:53:17 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:20.111140 | orchestrator | 2025-05-06 01:53:17 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:20.111337 | orchestrator | 2025-05-06 01:53:20 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:23.165057 | orchestrator | 2025-05-06 01:53:20 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:23.165227 | orchestrator | 2025-05-06 01:53:23 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:26.211671 | orchestrator | 2025-05-06 01:53:23 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:26.211826 | orchestrator | 2025-05-06 01:53:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:29.261000 | orchestrator | 2025-05-06 01:53:26 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:29.261144 | orchestrator | 2025-05-06 01:53:29 | INFO  | Task 
6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:32.305420 | orchestrator | 2025-05-06 01:53:29 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:32.305566 | orchestrator | 2025-05-06 01:53:32 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:35.351972 | orchestrator | 2025-05-06 01:53:32 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:35.352163 | orchestrator | 2025-05-06 01:53:35 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:38.396132 | orchestrator | 2025-05-06 01:53:35 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:38.396334 | orchestrator | 2025-05-06 01:53:38 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:41.445608 | orchestrator | 2025-05-06 01:53:38 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:41.445776 | orchestrator | 2025-05-06 01:53:41 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:44.487019 | orchestrator | 2025-05-06 01:53:41 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:44.487163 | orchestrator | 2025-05-06 01:53:44 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:47.536625 | orchestrator | 2025-05-06 01:53:44 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:47.536782 | orchestrator | 2025-05-06 01:53:47 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:50.588070 | orchestrator | 2025-05-06 01:53:47 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:50.588213 | orchestrator | 2025-05-06 01:53:50 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:53.640822 | orchestrator | 2025-05-06 01:53:50 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:53.641009 | orchestrator | 2025-05-06 01:53:53 | INFO  | Task 
b9b68927-b032-4802-91e3-667d3e009e36 is in state STARTED 2025-05-06 01:53:53.641887 | orchestrator | 2025-05-06 01:53:53 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:53.642006 | orchestrator | 2025-05-06 01:53:53 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:56.706682 | orchestrator | 2025-05-06 01:53:56 | INFO  | Task b9b68927-b032-4802-91e3-667d3e009e36 is in state STARTED 2025-05-06 01:53:56.708010 | orchestrator | 2025-05-06 01:53:56 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:53:59.774119 | orchestrator | 2025-05-06 01:53:56 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:53:59.774270 | orchestrator | 2025-05-06 01:53:59 | INFO  | Task b9b68927-b032-4802-91e3-667d3e009e36 is in state STARTED 2025-05-06 01:53:59.776932 | orchestrator | 2025-05-06 01:53:59 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:02.829505 | orchestrator | 2025-05-06 01:53:59 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:02.829642 | orchestrator | 2025-05-06 01:54:02 | INFO  | Task b9b68927-b032-4802-91e3-667d3e009e36 is in state STARTED 2025-05-06 01:54:02.830365 | orchestrator | 2025-05-06 01:54:02 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:05.885621 | orchestrator | 2025-05-06 01:54:02 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:05.885819 | orchestrator | 2025-05-06 01:54:05 | INFO  | Task b9b68927-b032-4802-91e3-667d3e009e36 is in state SUCCESS 2025-05-06 01:54:05.887081 | orchestrator | 2025-05-06 01:54:05 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:05.887355 | orchestrator | 2025-05-06 01:54:05 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:08.934433 | orchestrator | 2025-05-06 01:54:08 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 
01:54:11.980599 | orchestrator | 2025-05-06 01:54:08 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:11.980749 | orchestrator | 2025-05-06 01:54:11 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:15.036310 | orchestrator | 2025-05-06 01:54:11 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:15.036619 | orchestrator | 2025-05-06 01:54:15 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:18.086079 | orchestrator | 2025-05-06 01:54:15 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:18.086234 | orchestrator | 2025-05-06 01:54:18 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:21.134801 | orchestrator | 2025-05-06 01:54:18 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:21.134946 | orchestrator | 2025-05-06 01:54:21 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:24.190086 | orchestrator | 2025-05-06 01:54:21 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:24.190232 | orchestrator | 2025-05-06 01:54:24 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:27.236584 | orchestrator | 2025-05-06 01:54:24 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:27.236729 | orchestrator | 2025-05-06 01:54:27 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:30.286678 | orchestrator | 2025-05-06 01:54:27 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:30.286821 | orchestrator | 2025-05-06 01:54:30 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:33.334817 | orchestrator | 2025-05-06 01:54:30 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:33.334961 | orchestrator | 2025-05-06 01:54:33 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:36.378823 
| orchestrator | 2025-05-06 01:54:33 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:36.378978 | orchestrator | 2025-05-06 01:54:36 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:36.379123 | orchestrator | 2025-05-06 01:54:36 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:39.428855 | orchestrator | 2025-05-06 01:54:39 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:42.477253 | orchestrator | 2025-05-06 01:54:39 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:42.477493 | orchestrator | 2025-05-06 01:54:42 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:45.532312 | orchestrator | 2025-05-06 01:54:42 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:45.532532 | orchestrator | 2025-05-06 01:54:45 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:48.576732 | orchestrator | 2025-05-06 01:54:45 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:48.576885 | orchestrator | 2025-05-06 01:54:48 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:51.624272 | orchestrator | 2025-05-06 01:54:48 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:51.624417 | orchestrator | 2025-05-06 01:54:51 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:54.672870 | orchestrator | 2025-05-06 01:54:51 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:54.673012 | orchestrator | 2025-05-06 01:54:54 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:54:57.722880 | orchestrator | 2025-05-06 01:54:54 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:54:57.723025 | orchestrator | 2025-05-06 01:54:57 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:00.773316 | orchestrator 
| 2025-05-06 01:54:57 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:00.773511 | orchestrator | 2025-05-06 01:55:00 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:03.823199 | orchestrator | 2025-05-06 01:55:00 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:03.823315 | orchestrator | 2025-05-06 01:55:03 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:06.871646 | orchestrator | 2025-05-06 01:55:03 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:06.871811 | orchestrator | 2025-05-06 01:55:06 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:09.923430 | orchestrator | 2025-05-06 01:55:06 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:09.923620 | orchestrator | 2025-05-06 01:55:09 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:12.967038 | orchestrator | 2025-05-06 01:55:09 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:12.967184 | orchestrator | 2025-05-06 01:55:12 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:16.023409 | orchestrator | 2025-05-06 01:55:12 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:16.023559 | orchestrator | 2025-05-06 01:55:16 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:19.074791 | orchestrator | 2025-05-06 01:55:16 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:19.074948 | orchestrator | 2025-05-06 01:55:19 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:22.126985 | orchestrator | 2025-05-06 01:55:19 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:22.127163 | orchestrator | 2025-05-06 01:55:22 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:25.183342 | orchestrator | 2025-05-06 
01:55:22 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:25.183483 | orchestrator | 2025-05-06 01:55:25 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:28.233294 | orchestrator | 2025-05-06 01:55:25 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:28.233441 | orchestrator | 2025-05-06 01:55:28 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:31.280514 | orchestrator | 2025-05-06 01:55:28 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:31.280756 | orchestrator | 2025-05-06 01:55:31 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:34.327986 | orchestrator | 2025-05-06 01:55:31 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:34.328130 | orchestrator | 2025-05-06 01:55:34 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:37.372722 | orchestrator | 2025-05-06 01:55:34 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:37.372873 | orchestrator | 2025-05-06 01:55:37 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:40.426475 | orchestrator | 2025-05-06 01:55:37 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:40.426668 | orchestrator | 2025-05-06 01:55:40 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:43.472979 | orchestrator | 2025-05-06 01:55:40 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:43.473128 | orchestrator | 2025-05-06 01:55:43 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:46.512306 | orchestrator | 2025-05-06 01:55:43 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:46.512455 | orchestrator | 2025-05-06 01:55:46 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:49.560052 | orchestrator | 2025-05-06 01:55:46 | INFO 
 | Wait 1 second(s) until the next check 2025-05-06 01:55:49.560191 | orchestrator | 2025-05-06 01:55:49 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:52.608380 | orchestrator | 2025-05-06 01:55:49 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:52.608523 | orchestrator | 2025-05-06 01:55:52 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:55.658705 | orchestrator | 2025-05-06 01:55:52 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:55.658884 | orchestrator | 2025-05-06 01:55:55 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:55:58.707647 | orchestrator | 2025-05-06 01:55:55 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:55:58.707838 | orchestrator | 2025-05-06 01:55:58 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:56:01.747928 | orchestrator | 2025-05-06 01:55:58 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:56:01.748097 | orchestrator | 2025-05-06 01:56:01 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:56:04.804684 | orchestrator | 2025-05-06 01:56:01 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:56:04.804851 | orchestrator | 2025-05-06 01:56:04 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:56:07.854158 | orchestrator | 2025-05-06 01:56:04 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:56:07.854337 | orchestrator | 2025-05-06 01:56:07 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:56:10.908472 | orchestrator | 2025-05-06 01:56:07 | INFO  | Wait 1 second(s) until the next check 2025-05-06 01:56:10.908687 | orchestrator | 2025-05-06 01:56:10 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED 2025-05-06 01:56:13.953880 | orchestrator | 2025-05-06 01:56:10 | INFO  | Wait 1 
second(s) until the next check
2025-05-06 01:56:13.954095 | orchestrator | 2025-05-06 01:56:13 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 01:56:17.005190 | orchestrator | 2025-05-06 01:56:13 | INFO  | Wait 1 second(s) until the next check
[... the same "is in state STARTED" / "Wait 1 second(s) until the next check" pair repeats roughly every 3 seconds until 02:00:26 ...]
2025-05-06 02:00:26.998652 | orchestrator | 2025-05-06 02:00:26 | INFO  | Task 6bf1245d-e18f-4d09-b4c2-f5227351db01 is in state STARTED
2025-05-06 02:00:28.046363 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-06 02:00:28.056003 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-06 02:00:28.787946 | PLAY [Post output play]
2025-05-06 02:00:28.818515 | LOOP [stage-output : Register sources]
2025-05-06 02:00:28.906286 | TASK [stage-output : Check sudo]
2025-05-06 02:00:29.656839 | orchestrator | sudo: a password is required
2025-05-06 02:00:29.951043 | orchestrator | ok: Runtime: 0:00:00.017814
2025-05-06 02:00:29.968843 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-06 02:00:30.019108 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-06 02:00:30.111086 | orchestrator | ok
2025-05-06 02:00:30.122277 | LOOP [stage-output : Ensure target folders exist]
2025-05-06 02:00:30.580270 | orchestrator | ok: "docs"
2025-05-06 02:00:30.830831 | orchestrator | ok: "artifacts"
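The repeated "is in state STARTED" / "Wait 1 second(s) until the next check" lines above are a client polling a task until it reaches a terminal state, and the run times out while the task is still STARTED. A minimal sketch of such a poll loop (hypothetical names; this is not the actual osism client code) is:

```python
import time

def wait_for_task(get_state, task_id, interval=1.0, timeout=3600.0,
                  sleep=time.sleep, clock=time.monotonic):
    """Poll get_state(task_id) until it reports a terminal state.

    Returns the final state, or raises TimeoutError if `timeout` seconds
    elapse while the task is still PENDING/STARTED. `sleep` and `clock`
    are injectable so the loop can be tested without real waiting.
    """
    deadline = clock() + timeout
    while True:
        state = get_state(task_id)
        if state not in ("PENDING", "STARTED"):
            return state  # e.g. SUCCESS or FAILURE
        if clock() >= deadline:
            # corresponds to the RUN END RESULT_TIMED_OUT outcome above
            raise TimeoutError(f"task {task_id} still {state} after {timeout}s")
        sleep(interval)  # "Wait 1 second(s) until the next check"
```

A job-level timeout like the one that fired here simply means the deadline branch (or an external kill) wins before the task ever leaves STARTED.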
2025-05-06 02:00:31.092924 | orchestrator | ok: "logs"
2025-05-06 02:00:31.121719 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-06 02:00:31.165017 | TASK [stage-output : Make all log files readable]
2025-05-06 02:00:31.471445 | orchestrator | ok
2025-05-06 02:00:31.482120 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-06 02:00:31.528898 | orchestrator | skipping: Conditional result was False
2025-05-06 02:00:31.546513 | TASK [stage-output : Discover log files for compression]
2025-05-06 02:00:31.574017 | orchestrator | skipping: Conditional result was False
2025-05-06 02:00:31.593660 | LOOP [stage-output : Archive everything from logs]
2025-05-06 02:00:31.671848 | PLAY [Post cleanup play]
2025-05-06 02:00:31.695676 | TASK [Set cloud fact (Zuul deployment)]
2025-05-06 02:00:31.763218 | orchestrator | ok
2025-05-06 02:00:31.775013 | TASK [Set cloud fact (local deployment)]
2025-05-06 02:00:31.809644 | orchestrator | skipping: Conditional result was False
2025-05-06 02:00:31.826649 | TASK [Clean the cloud environment]
2025-05-06 02:00:32.463166 | orchestrator | 2025-05-06 02:00:32 - clean up servers
2025-05-06 02:00:33.409861 | orchestrator | 2025-05-06 02:00:33 - testbed-manager
2025-05-06 02:00:33.501439 | orchestrator | 2025-05-06 02:00:33 - testbed-node-2
2025-05-06 02:00:33.603292 | orchestrator | 2025-05-06 02:00:33 - testbed-node-1
2025-05-06 02:00:33.695361 | orchestrator | 2025-05-06 02:00:33 - testbed-node-3
2025-05-06 02:00:33.813303 | orchestrator | 2025-05-06 02:00:33 - testbed-node-4
2025-05-06 02:00:33.908264 | orchestrator | 2025-05-06 02:00:33 - testbed-node-5
2025-05-06 02:00:34.006720 | orchestrator | 2025-05-06 02:00:34 - testbed-node-0
2025-05-06 02:00:34.121571 | orchestrator | 2025-05-06 02:00:34 - clean up keypairs
2025-05-06 02:00:34.144774 | orchestrator | 2025-05-06 02:00:34 - testbed
2025-05-06 02:00:34.180736 | orchestrator | 2025-05-06 02:00:34 - wait for servers to be gone
2025-05-06 02:00:41.051643 | orchestrator | 2025-05-06 02:00:41 - clean up ports
2025-05-06 02:00:41.291856 | orchestrator | 2025-05-06 02:00:41 - 2ff8065f-94c4-4615-b287-f98cc34031da
2025-05-06 02:00:41.515036 | orchestrator | 2025-05-06 02:00:41 - 4f635ea9-f231-4607-a090-335c3eebcbff
2025-05-06 02:00:41.765569 | orchestrator | 2025-05-06 02:00:41 - 8449a8d9-78d8-4623-95ea-31315fad994e
2025-05-06 02:00:41.956400 | orchestrator | 2025-05-06 02:00:41 - 88c70c13-dc56-45f9-8a30-8a9e3287c880
2025-05-06 02:00:42.147247 | orchestrator | 2025-05-06 02:00:42 - 8d83ea38-33a9-47e2-ae27-0e5d3aeae834
2025-05-06 02:00:42.494321 | orchestrator | 2025-05-06 02:00:42 - e902e7c8-820f-4e06-bb39-81c3f3fbab6b
2025-05-06 02:00:42.706895 | orchestrator | 2025-05-06 02:00:42 - ee17a604-c8a6-460a-a76e-053862db9a23
2025-05-06 02:00:42.897782 | orchestrator | 2025-05-06 02:00:42 - clean up volumes
2025-05-06 02:00:43.086837 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-2-node-base
2025-05-06 02:00:43.141113 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-0-node-base
2025-05-06 02:00:43.180308 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-1-node-base
2025-05-06 02:00:43.224931 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-4-node-base
2025-05-06 02:00:43.266071 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-manager-base
2025-05-06 02:00:43.303292 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-3-node-base
2025-05-06 02:00:43.345599 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-12-node-0
2025-05-06 02:00:43.387034 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-2-node-2
2025-05-06 02:00:43.438091 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-17-node-5
2025-05-06 02:00:43.483250 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-11-node-5
2025-05-06 02:00:43.522358 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-3-node-3
2025-05-06 02:00:43.566851 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-9-node-3
2025-05-06 02:00:43.609324 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-5-node-base
2025-05-06 02:00:43.651764 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-0-node-0
2025-05-06 02:00:43.700313 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-7-node-1
2025-05-06 02:00:43.741157 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-16-node-4
2025-05-06 02:00:43.783429 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-5-node-5
2025-05-06 02:00:43.826581 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-10-node-4
2025-05-06 02:00:43.868259 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-8-node-2
2025-05-06 02:00:43.912658 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-14-node-2
2025-05-06 02:00:43.956681 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-15-node-3
2025-05-06 02:00:43.999054 | orchestrator | 2025-05-06 02:00:43 - testbed-volume-4-node-4
2025-05-06 02:00:44.039826 | orchestrator | 2025-05-06 02:00:44 - testbed-volume-6-node-0
2025-05-06 02:00:44.079651 | orchestrator | 2025-05-06 02:00:44 - testbed-volume-1-node-1
2025-05-06 02:00:44.125711 | orchestrator | 2025-05-06 02:00:44 - testbed-volume-13-node-1
2025-05-06 02:00:44.167026 | orchestrator | 2025-05-06 02:00:44 - disconnect routers
2025-05-06 02:00:44.270492 | orchestrator | 2025-05-06 02:00:44 - testbed
2025-05-06 02:00:44.981017 | orchestrator | 2025-05-06 02:00:44 - clean up subnets
2025-05-06 02:00:45.019324 | orchestrator | 2025-05-06 02:00:45 - subnet-testbed-management
2025-05-06 02:00:45.155825 | orchestrator | 2025-05-06 02:00:45 - clean up networks
2025-05-06 02:00:45.350918 | orchestrator | 2025-05-06 02:00:45 - net-testbed-management
2025-05-06 02:00:45.606367 | orchestrator | 2025-05-06 02:00:45 - clean up security groups
2025-05-06 02:00:45.643363 | orchestrator | 2025-05-06 02:00:45 - testbed-node
2025-05-06 02:00:45.732142 | orchestrator | 2025-05-06 02:00:45 - testbed-management
2025-05-06 02:00:45.817948 | orchestrator | 2025-05-06 02:00:45 - clean up floating ips
2025-05-06 02:00:45.847038 | orchestrator | 2025-05-06 02:00:45 - 81.163.192.79
2025-05-06 02:00:46.223984 | orchestrator | 2025-05-06 02:00:46 - clean up routers
2025-05-06 02:00:46.312795 | orchestrator | 2025-05-06 02:00:46 - testbed
2025-05-06 02:00:47.388561 | orchestrator | changed
2025-05-06 02:00:47.432571 | PLAY RECAP
2025-05-06 02:00:47.432642 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-06 02:00:47.550048 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-06 02:00:47.553332 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-06 02:00:48.261738 | PLAY [Base post-fetch]
2025-05-06 02:00:48.291521 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-06 02:00:48.358090 | orchestrator | skipping: Conditional result was False
2025-05-06 02:00:48.373076 | TASK [fetch-output : Set log path for single node]
2025-05-06 02:00:48.437833 | orchestrator | ok
2025-05-06 02:00:48.448072 | LOOP [fetch-output : Ensure local output dirs]
2025-05-06 02:00:48.933425 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/e6b26d2a336d434bb99c7a10a0588d88/work/logs"
2025-05-06 02:00:49.207789 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e6b26d2a336d434bb99c7a10a0588d88/work/artifacts"
2025-05-06 02:00:49.469779 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e6b26d2a336d434bb99c7a10a0588d88/work/docs"
2025-05-06 02:00:49.496882 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-06 02:00:50.301217 | orchestrator | changed: .d..t...... ./
2025-05-06 02:00:50.301561 | orchestrator | changed: All items complete
2025-05-06 02:00:50.919712 | orchestrator | changed: .d..t...... ./
2025-05-06 02:00:51.512049 | orchestrator | changed: .d..t...... ./
2025-05-06 02:00:51.539478 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-06 02:00:51.578072 | orchestrator | skipping: Conditional result was False
2025-05-06 02:00:51.584735 | orchestrator | skipping: Conditional result was False
2025-05-06 02:00:51.638535 | PLAY RECAP
2025-05-06 02:00:51.638592 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-06 02:00:51.758712 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-06 02:00:51.767021 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-06 02:00:52.471683 | PLAY [Base post]
2025-05-06 02:00:52.501126 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-06 02:00:53.591472 | orchestrator | changed
2025-05-06 02:00:53.629644 | PLAY RECAP
2025-05-06 02:00:53.629713 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-06 02:00:53.747265 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-06 02:00:53.755496 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-06 02:00:54.558457 | PLAY [Base post-logs]
2025-05-06 02:00:54.576487 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-06 02:00:55.095265 | localhost | changed
2025-05-06 02:00:55.101564 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-06 02:00:55.144033 | localhost | ok
2025-05-06 02:00:55.154010 | TASK [Set zuul-log-path fact]
2025-05-06 02:00:55.184060 | localhost | ok
2025-05-06 02:00:55.199018 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-06 02:00:55.243228 | localhost | ok
2025-05-06 02:00:55.253292 | TASK [upload-logs : Create log directories]
2025-05-06 02:00:55.795236 | localhost | changed
2025-05-06 02:00:55.804773 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-06 02:00:56.344224 | localhost -> localhost | ok: Runtime: 0:00:00.009283
2025-05-06 02:00:56.356193 | TASK [upload-logs : Upload logs to log server]
2025-05-06 02:00:56.943949 | localhost | Output suppressed because no_log was given
2025-05-06 02:00:56.947467 | LOOP [upload-logs : Compress console log and json output]
2025-05-06 02:00:57.022273 | localhost | skipping: Conditional result was False
2025-05-06 02:00:57.032130 | localhost | skipping: Conditional result was False
2025-05-06 02:00:57.044712 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-06 02:00:57.110484 | localhost | skipping: Conditional result was False
2025-05-06 02:00:57.124362 | localhost | skipping: Conditional result was False
2025-05-06 02:00:57.138285 | LOOP [upload-logs : Upload console log and json output]